Test Report: Docker_Linux_crio_arm64 21895

382ea0a147905a9644676f66ab1ed2cbc8737b3b:2025-11-15:42335

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.33
35 TestAddons/parallel/Registry 15.25
36 TestAddons/parallel/RegistryCreds 0.47
37 TestAddons/parallel/Ingress 146.11
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.41
41 TestAddons/parallel/CSI 48.62
42 TestAddons/parallel/Headlamp 3.68
43 TestAddons/parallel/CloudSpanner 5.57
44 TestAddons/parallel/LocalPath 8.96
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.79
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.96
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
135 TestFunctional/parallel/ServiceCmd/Format 0.6
136 TestFunctional/parallel/ServiceCmd/URL 0.51
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
191 TestJSONOutput/pause/Command 1.85
197 TestJSONOutput/unpause/Command 1.85
282 TestPause/serial/Pause 7.28
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.43
304 TestStartStop/group/old-k8s-version/serial/Pause 6.21
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.58
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.58
322 TestStartStop/group/no-preload/serial/Pause 7.95
328 TestStartStop/group/embed-certs/serial/Pause 7.49
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.57
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.22
342 TestStartStop/group/newest-cni/serial/Pause 6.37
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.22
TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable volcano --alsologtostderr -v=1: exit status 11 (329.41003ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:35:29.127833  523296 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:35:29.128695  523296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:29.128715  523296 out.go:374] Setting ErrFile to fd 2...
	I1115 09:35:29.128721  523296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:29.129023  523296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:35:29.129341  523296 mustload.go:66] Loading cluster: addons-612806
	I1115 09:35:29.129799  523296 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:29.129823  523296 addons.go:607] checking whether the cluster is paused
	I1115 09:35:29.129971  523296 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:29.129990  523296 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:35:29.130489  523296 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:35:29.154962  523296 ssh_runner.go:195] Run: systemctl --version
	I1115 09:35:29.155012  523296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:35:29.189312  523296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:35:29.304374  523296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:35:29.304456  523296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:35:29.335257  523296 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:35:29.335278  523296 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:35:29.335283  523296 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:35:29.335287  523296 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:35:29.335291  523296 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:35:29.335294  523296 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:35:29.335299  523296 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:35:29.335302  523296 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:35:29.335305  523296 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:35:29.335312  523296 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:35:29.335324  523296 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:35:29.335330  523296 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:35:29.335333  523296 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:35:29.335337  523296 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:35:29.335346  523296 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:35:29.335355  523296 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:35:29.335362  523296 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:35:29.335367  523296 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:35:29.335370  523296 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:35:29.335374  523296 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:35:29.335378  523296 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:35:29.335382  523296 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:35:29.335385  523296 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:35:29.335388  523296 cri.go:89] found id: ""
	I1115 09:35:29.335438  523296 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:35:29.350740  523296 out.go:203] 
	W1115 09:35:29.353741  523296 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:35:29.353781  523296 out.go:285] * 
	* 
	W1115 09:35:29.360676  523296 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:35:29.363653  523296 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.33s)

TestAddons/parallel/Registry (15.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.790642ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002898694s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004045089s
addons_test.go:392: (dbg) Run:  kubectl --context addons-612806 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-612806 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-612806 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.657904079s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 ip
2025/11/15 09:35:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable registry --alsologtostderr -v=1: exit status 11 (309.534311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:35:54.678267  523834 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:35:54.679150  523834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:54.679165  523834 out.go:374] Setting ErrFile to fd 2...
	I1115 09:35:54.679171  523834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:54.679413  523834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:35:54.679685  523834 mustload.go:66] Loading cluster: addons-612806
	I1115 09:35:54.680072  523834 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:54.680091  523834 addons.go:607] checking whether the cluster is paused
	I1115 09:35:54.680201  523834 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:54.680246  523834 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:35:54.680713  523834 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:35:54.712839  523834 ssh_runner.go:195] Run: systemctl --version
	I1115 09:35:54.712898  523834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:35:54.757255  523834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:35:54.860360  523834 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:35:54.860448  523834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:35:54.890795  523834 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:35:54.890827  523834 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:35:54.890833  523834 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:35:54.890837  523834 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:35:54.890840  523834 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:35:54.890844  523834 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:35:54.890847  523834 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:35:54.890873  523834 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:35:54.890877  523834 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:35:54.890884  523834 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:35:54.890894  523834 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:35:54.890898  523834 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:35:54.890901  523834 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:35:54.890905  523834 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:35:54.890908  523834 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:35:54.890921  523834 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:35:54.890929  523834 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:35:54.890961  523834 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:35:54.890967  523834 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:35:54.890972  523834 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:35:54.890978  523834 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:35:54.890984  523834 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:35:54.890988  523834 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:35:54.890992  523834 cri.go:89] found id: ""
	I1115 09:35:54.891052  523834 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:35:54.905582  523834 out.go:203] 
	W1115 09:35:54.908608  523834 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:35:54.908643  523834 out.go:285] * 
	* 
	W1115 09:35:54.915658  523834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:35:54.918723  523834 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.25s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.356877ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-612806
addons_test.go:332: (dbg) Run:  kubectl --context addons-612806 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.278232ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:36:50.054561  525899 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:50.055449  525899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:50.055505  525899 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:50.055527  525899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:50.055888  525899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:50.056297  525899 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:50.056852  525899 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:50.056895  525899 addons.go:607] checking whether the cluster is paused
	I1115 09:36:50.057131  525899 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:50.057169  525899 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:50.057752  525899 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:50.075437  525899 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:50.075495  525899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:50.093179  525899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:50.200227  525899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:50.200313  525899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:50.235065  525899 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:50.235088  525899 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:50.235099  525899 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:50.235103  525899 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:50.235106  525899 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:50.235110  525899 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:50.235113  525899 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:50.235116  525899 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:50.235119  525899 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:50.235129  525899 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:50.235133  525899 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:50.235137  525899 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:50.235140  525899 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:50.235148  525899 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:50.235152  525899 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:50.235157  525899 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:50.235164  525899 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:50.235168  525899 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:50.235172  525899 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:50.235175  525899 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:50.235179  525899 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:50.235182  525899 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:50.235186  525899 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:50.235189  525899 cri.go:89] found id: ""
	I1115 09:36:50.235246  525899 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:50.249914  525899 out.go:203] 
	W1115 09:36:50.252815  525899 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:50.252841  525899 out.go:285] * 
	* 
	W1115 09:36:50.259503  525899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:50.262290  525899 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (146.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-612806 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-612806 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-612806 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [20ed80a3-2d56-49ad-8a73-282fe8a067de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [20ed80a3-2d56-49ad-8a73-282fe8a067de] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003491504s
I1115 09:36:26.472956  516637 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.835985456s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-612806 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-612806
helpers_test.go:243: (dbg) docker inspect addons-612806:

-- stdout --
	[
	    {
	        "Id": "438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430",
	        "Created": "2025-11-15T09:33:13.482696763Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:33:13.542000098Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/hostname",
	        "HostsPath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/hosts",
	        "LogPath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430-json.log",
	        "Name": "/addons-612806",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-612806:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-612806",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430",
	                "LowerDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-612806",
	                "Source": "/var/lib/docker/volumes/addons-612806/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-612806",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-612806",
	                "name.minikube.sigs.k8s.io": "addons-612806",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d0509765267724c1eaf51396ff56b0e41c7fb1cb402b4a0332ae82c3be717b7",
	            "SandboxKey": "/var/run/docker/netns/7d0509765267",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-612806": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:56:19:f8:d0:55",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac6b3eeed3ac962be623bbf517b0be3ce2c94e3e1771253d91fbecf4ee0b09a9",
	                    "EndpointID": "da244236754e67436b73e71d65d108b376261c79d6f38e8b3905dc985deb8912",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-612806",
	                        "438186eb0f36"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-612806 -n addons-612806
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-612806 logs -n 25: (1.968221602s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-650018                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-650018 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ start   │ --download-only -p binary-mirror-339675 --alsologtostderr --binary-mirror http://127.0.0.1:41649 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-339675   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ delete  │ -p binary-mirror-339675                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-339675   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ addons  │ enable dashboard -p addons-612806                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ addons  │ disable dashboard -p addons-612806                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ start   │ -p addons-612806 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:35 UTC │
	│ addons  │ addons-612806 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ addons  │ addons-612806 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ addons  │ addons-612806 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ addons  │ addons-612806 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ ip      │ addons-612806 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │ 15 Nov 25 09:35 UTC │
	│ addons  │ addons-612806 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ ssh     │ addons-612806 ssh cat /opt/local-path-provisioner/pvc-656ebd50-b53f-48f0-84f4-4943fda1a953_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │ 15 Nov 25 09:36 UTC │
	│ addons  │ addons-612806 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ enable headlamp -p addons-612806 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ ssh     │ addons-612806 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-612806                                                                                                                                                                                                                                                                                                                                                                                           │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │ 15 Nov 25 09:36 UTC │
	│ addons  │ addons-612806 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ ip      │ addons-612806 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │ 15 Nov 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:32:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:32:47.921727  517398 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:32:47.921836  517398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:47.921850  517398 out.go:374] Setting ErrFile to fd 2...
	I1115 09:32:47.921856  517398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:47.922116  517398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:32:47.922562  517398 out.go:368] Setting JSON to false
	I1115 09:32:47.923398  517398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15319,"bootTime":1763183849,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:32:47.923461  517398 start.go:143] virtualization:  
	I1115 09:32:47.926654  517398 out.go:179] * [addons-612806] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 09:32:47.930414  517398 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:32:47.930497  517398 notify.go:221] Checking for updates...
	I1115 09:32:47.935996  517398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:32:47.938886  517398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:32:47.941693  517398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:32:47.944556  517398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 09:32:47.947504  517398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:32:47.950540  517398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:32:47.982124  517398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:32:47.982250  517398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:48.044103  517398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 09:32:48.034751487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:48.044246  517398 docker.go:319] overlay module found
	I1115 09:32:48.047391  517398 out.go:179] * Using the docker driver based on user configuration
	I1115 09:32:48.050266  517398 start.go:309] selected driver: docker
	I1115 09:32:48.050286  517398 start.go:930] validating driver "docker" against <nil>
	I1115 09:32:48.050300  517398 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:32:48.051043  517398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:48.106634  517398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 09:32:48.097646606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:48.106795  517398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:32:48.107060  517398 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:32:48.109995  517398 out.go:179] * Using Docker driver with root privileges
	I1115 09:32:48.112897  517398 cni.go:84] Creating CNI manager for ""
	I1115 09:32:48.112961  517398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:32:48.112975  517398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:32:48.113054  517398 start.go:353] cluster config:
	{Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1115 09:32:48.116089  517398 out.go:179] * Starting "addons-612806" primary control-plane node in "addons-612806" cluster
	I1115 09:32:48.118873  517398 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:32:48.121725  517398 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:32:48.124544  517398 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:32:48.124594  517398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 09:32:48.124609  517398 cache.go:65] Caching tarball of preloaded images
	I1115 09:32:48.124616  517398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:32:48.124692  517398 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 09:32:48.124703  517398 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:32:48.125051  517398 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/config.json ...
	I1115 09:32:48.125082  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/config.json: {Name:mk63094cae3e06c4d6bba640c475a86257cf6dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:32:48.140410  517398 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:32:48.140538  517398 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:32:48.140558  517398 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:32:48.140563  517398 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:32:48.140571  517398 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:32:48.140577  517398 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 09:33:05.954667  517398 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 09:33:05.954707  517398 cache.go:243] Successfully downloaded all kic artifacts
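	# Annotation (not part of the original log): the kicbase image was just loaded into the local
	# Docker daemon from the cached tarball. A quick sanity check that it is present would be
	# (sketch; the tag/digest shown can vary by run):
	docker images --digests gcr.io/k8s-minikube/kicbase-builds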
	I1115 09:33:05.954737  517398 start.go:360] acquireMachinesLock for addons-612806: {Name:mk9f453cd28739ad7906c1b688d41cb5ec60c803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:05.954863  517398 start.go:364] duration metric: took 107.944µs to acquireMachinesLock for "addons-612806"
	I1115 09:33:05.954890  517398 start.go:93] Provisioning new machine with config: &{Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:05.954964  517398 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:33:05.958411  517398 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 09:33:05.958658  517398 start.go:159] libmachine.API.Create for "addons-612806" (driver="docker")
	I1115 09:33:05.958706  517398 client.go:173] LocalClient.Create starting
	I1115 09:33:05.958840  517398 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 09:33:06.449363  517398 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 09:33:06.750852  517398 cli_runner.go:164] Run: docker network inspect addons-612806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:33:06.769699  517398 cli_runner.go:211] docker network inspect addons-612806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:33:06.769781  517398 network_create.go:284] running [docker network inspect addons-612806] to gather additional debugging logs...
	I1115 09:33:06.769802  517398 cli_runner.go:164] Run: docker network inspect addons-612806
	W1115 09:33:06.787494  517398 cli_runner.go:211] docker network inspect addons-612806 returned with exit code 1
	I1115 09:33:06.787525  517398 network_create.go:287] error running [docker network inspect addons-612806]: docker network inspect addons-612806: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-612806 not found
	I1115 09:33:06.787552  517398 network_create.go:289] output of [docker network inspect addons-612806]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-612806 not found
	
	** /stderr **
	I1115 09:33:06.787651  517398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:06.805119  517398 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a7170}
	I1115 09:33:06.805167  517398 network_create.go:124] attempt to create docker network addons-612806 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 09:33:06.805223  517398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-612806 addons-612806
	I1115 09:33:06.861275  517398 network_create.go:108] docker network addons-612806 192.168.49.0/24 created
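	# Annotation (not part of the original log): the profile's dedicated bridge network was created
	# with the subnet/gateway/MTU from the command above. A way to read those back afterwards
	# (sketch, assuming the same profile name addons-612806):
	docker network inspect addons-612806 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
	# expected per the log: subnet=192.168.49.0/24 gateway=192.168.49.1 mtu=1500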
	I1115 09:33:06.861310  517398 kic.go:121] calculated static IP "192.168.49.2" for the "addons-612806" container
	I1115 09:33:06.861383  517398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:33:06.876250  517398 cli_runner.go:164] Run: docker volume create addons-612806 --label name.minikube.sigs.k8s.io=addons-612806 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:33:06.893958  517398 oci.go:103] Successfully created a docker volume addons-612806
	I1115 09:33:06.894047  517398 cli_runner.go:164] Run: docker run --rm --name addons-612806-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-612806 --entrypoint /usr/bin/test -v addons-612806:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:33:09.001478  517398 cli_runner.go:217] Completed: docker run --rm --name addons-612806-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-612806 --entrypoint /usr/bin/test -v addons-612806:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.107383122s)
	I1115 09:33:09.001517  517398 oci.go:107] Successfully prepared a docker volume addons-612806
	I1115 09:33:09.001582  517398 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:09.001637  517398 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 09:33:09.001708  517398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-612806:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 09:33:13.415738  517398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-612806:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413986901s)
	I1115 09:33:13.415771  517398 kic.go:203] duration metric: took 4.414130905s to extract preloaded images to volume ...
	W1115 09:33:13.415902  517398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 09:33:13.416016  517398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:33:13.468669  517398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-612806 --name addons-612806 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-612806 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-612806 --network addons-612806 --ip 192.168.49.2 --volume addons-612806:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:33:13.753264  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Running}}
	I1115 09:33:13.778203  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:13.809391  517398 cli_runner.go:164] Run: docker exec addons-612806 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:33:13.860669  517398 oci.go:144] the created container "addons-612806" has a running status.
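	# Annotation (not part of the original log): the node container publishes 8443 (API server) and
	# 22 (SSH) on dynamically assigned 127.0.0.1 host ports. The mappings chosen for this run can be
	# read back with docker port (sketch; the SSH mapping resolves to 127.0.0.1:33498 later in this log):
	docker port addons-612806 22
	docker port addons-612806 8443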
	I1115 09:33:13.860697  517398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa...
	I1115 09:33:14.068157  517398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:33:14.115128  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:14.137779  517398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:33:14.137797  517398 kic_runner.go:114] Args: [docker exec --privileged addons-612806 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:33:14.227504  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:14.247839  517398 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:14.247934  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:14.265412  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:14.265772  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:14.265789  517398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:14.266361  517398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50302->127.0.0.1:33498: read: connection reset by peer
	I1115 09:33:17.417177  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-612806
	
	I1115 09:33:17.417203  517398 ubuntu.go:182] provisioning hostname "addons-612806"
	I1115 09:33:17.417266  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:17.435189  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:17.435497  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:17.435513  517398 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-612806 && echo "addons-612806" | sudo tee /etc/hostname
	I1115 09:33:17.594747  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-612806
	
	I1115 09:33:17.594823  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:17.612484  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:17.612798  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:17.612824  517398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-612806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-612806/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-612806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:17.765683  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:17.765747  517398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 09:33:17.765783  517398 ubuntu.go:190] setting up certificates
	I1115 09:33:17.765793  517398 provision.go:84] configureAuth start
	I1115 09:33:17.765854  517398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-612806
	I1115 09:33:17.782355  517398 provision.go:143] copyHostCerts
	I1115 09:33:17.782429  517398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 09:33:17.782542  517398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 09:33:17.782602  517398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 09:33:17.782657  517398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.addons-612806 san=[127.0.0.1 192.168.49.2 addons-612806 localhost minikube]
	I1115 09:33:18.076496  517398 provision.go:177] copyRemoteCerts
	I1115 09:33:18.076564  517398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:18.076613  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.095650  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.201537  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:18.220056  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:18.237223  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:33:18.254443  517398 provision.go:87] duration metric: took 488.624159ms to configureAuth
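	# Annotation (not part of the original log): configureAuth generated a server certificate with
	# the SANs listed above and copied it to /etc/docker on the node. The host-side copy could be
	# inspected like this (sketch; path taken from the ServerCertPath in the log, openssl assumed on the host):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected SANs per the log: 127.0.0.1, 192.168.49.2, addons-612806, localhost, minikube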
	I1115 09:33:18.254469  517398 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:18.254653  517398 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:18.254765  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.271576  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:18.271888  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:18.271909  517398 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:18.532098  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:18.532185  517398 machine.go:97] duration metric: took 4.284309468s to provisionDockerMachine
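	# Annotation (not part of the original log): provisioning wrote /etc/sysconfig/crio.minikube
	# inside the node and restarted CRI-O. Reading it back from the host would look like
	# (sketch; minikube ssh runs the command inside the node container):
	out/minikube-linux-arm64 -p addons-612806 ssh "cat /etc/sysconfig/crio.minikube"
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '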
	I1115 09:33:18.532228  517398 client.go:176] duration metric: took 12.573496901s to LocalClient.Create
	I1115 09:33:18.532282  517398 start.go:167] duration metric: took 12.573625833s to libmachine.API.Create "addons-612806"
	I1115 09:33:18.532312  517398 start.go:293] postStartSetup for "addons-612806" (driver="docker")
	I1115 09:33:18.532336  517398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:18.532440  517398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:18.532550  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.551054  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.657690  517398 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:18.660893  517398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:18.660922  517398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:18.660934  517398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 09:33:18.660998  517398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 09:33:18.661027  517398 start.go:296] duration metric: took 128.697178ms for postStartSetup
	I1115 09:33:18.661336  517398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-612806
	I1115 09:33:18.677397  517398 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/config.json ...
	I1115 09:33:18.677908  517398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:18.677969  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.694177  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.794973  517398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:18.799787  517398 start.go:128] duration metric: took 12.844807465s to createHost
	I1115 09:33:18.799812  517398 start.go:83] releasing machines lock for "addons-612806", held for 12.844939161s
	I1115 09:33:18.799901  517398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-612806
	I1115 09:33:18.816954  517398 ssh_runner.go:195] Run: cat /version.json
	I1115 09:33:18.817021  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.817312  517398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:18.817368  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.839313  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.847284  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.941322  517398 ssh_runner.go:195] Run: systemctl --version
	I1115 09:33:19.034552  517398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:19.069255  517398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:19.073580  517398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:19.073669  517398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:19.101444  517398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 09:33:19.101472  517398 start.go:496] detecting cgroup driver to use...
	I1115 09:33:19.101504  517398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 09:33:19.101553  517398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:19.118505  517398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:19.131953  517398 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:19.132018  517398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:19.149518  517398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:19.168035  517398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:19.291841  517398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:19.415217  517398 docker.go:234] disabling docker service ...
	I1115 09:33:19.415384  517398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:19.437936  517398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:19.451712  517398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:19.562751  517398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:19.693859  517398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:19.708277  517398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:19.722241  517398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:19.722315  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.730811  517398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:33:19.730880  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.739749  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.748038  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.757020  517398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:19.765640  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.774629  517398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.788433  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.797397  517398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:19.805084  517398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:19.812627  517398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:19.920312  517398 ssh_runner.go:195] Run: sudo systemctl restart crio
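	# Annotation (not part of the original log): the sed edits above set the pause image, cgroup
	# manager and conmon cgroup in the CRI-O drop-in before this restart. The resulting values could
	# be read back with (sketch; file path taken from the commands above):
	out/minikube-linux-arm64 -p addons-612806 ssh \
	  "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
	# expected values per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"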
	I1115 09:33:20.046497  517398 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:20.046635  517398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:20.050756  517398 start.go:564] Will wait 60s for crictl version
	I1115 09:33:20.050865  517398 ssh_runner.go:195] Run: which crictl
	I1115 09:33:20.054616  517398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:20.084442  517398 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:20.084644  517398 ssh_runner.go:195] Run: crio --version
	I1115 09:33:20.116145  517398 ssh_runner.go:195] Run: crio --version
	I1115 09:33:20.147765  517398 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:20.150791  517398 cli_runner.go:164] Run: docker network inspect addons-612806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:20.167992  517398 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:20.172069  517398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
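	# Annotation (not part of the original log): host.minikube.internal is mapped to the network
	# gateway (192.168.49.1) in the node's /etc/hosts so workloads can reach the host. A quick check
	# would be (sketch):
	out/minikube-linux-arm64 -p addons-612806 ssh "grep host.minikube.internal /etc/hosts"
	# 192.168.49.1	host.minikube.internal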
	I1115 09:33:20.181986  517398 kubeadm.go:884] updating cluster {Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:33:20.182120  517398 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:20.182178  517398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:33:20.219935  517398 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:33:20.219960  517398 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:33:20.220021  517398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:33:20.247339  517398 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:33:20.247364  517398 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:33:20.247374  517398 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:33:20.247462  517398 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-612806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
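	# Annotation (not part of the original log): the ExecStart override above is the kubelet systemd
	# drop-in that minikube installs; a few lines below it is copied to
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Viewing the installed unit on the node
	# (sketch):
	out/minikube-linux-arm64 -p addons-612806 ssh "cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"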
	I1115 09:33:20.247554  517398 ssh_runner.go:195] Run: crio config
	I1115 09:33:20.299720  517398 cni.go:84] Creating CNI manager for ""
	I1115 09:33:20.299746  517398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:33:20.299769  517398 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:33:20.299796  517398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-612806 NodeName:addons-612806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:33:20.299925  517398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-612806"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:33:20.300000  517398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:20.307692  517398 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:20.307795  517398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:33:20.315195  517398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:20.327488  517398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:20.339935  517398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
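	# Annotation (not part of the original log): the kubeadm config rendered above is staged on the
	# node as /var/tmp/minikube/kubeadm.yaml.new before it is handed to kubeadm. It can be inspected
	# in place with (sketch):
	out/minikube-linux-arm64 -p addons-612806 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"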
	I1115 09:33:20.355338  517398 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:20.359287  517398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:20.369004  517398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:20.476582  517398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:20.492435  517398 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806 for IP: 192.168.49.2
	I1115 09:33:20.492468  517398 certs.go:195] generating shared ca certs ...
	I1115 09:33:20.492484  517398 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:20.492662  517398 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 09:33:20.799976  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt ...
	I1115 09:33:20.800013  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt: {Name:mk70893942d6e5c2da13e34d090b8424f8dc0738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:20.800253  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key ...
	I1115 09:33:20.800269  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key: {Name:mk18fb438bed5d4ced16b917b9ea2ab121395897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:20.800362  517398 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 09:33:21.220692  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt ...
	I1115 09:33:21.220722  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt: {Name:mkd960b9e97f7373aafc1d971778195865fc5ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.220902  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key ...
	I1115 09:33:21.220915  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key: {Name:mk4d75450c4a986fcc17d4d30847824e0ed28462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.221008  517398 certs.go:257] generating profile certs ...
	I1115 09:33:21.221075  517398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.key
	I1115 09:33:21.221094  517398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt with IP's: []
	I1115 09:33:21.475433  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt ...
	I1115 09:33:21.475466  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: {Name:mk0d77c5fb4b349381e4035e01d6f84b4212981f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.475650  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.key ...
	I1115 09:33:21.475663  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.key: {Name:mkcf973be359fb928e69db1eb448a2e1aea313a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.475777  517398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9
	I1115 09:33:21.475799  517398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 09:33:21.713262  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9 ...
	I1115 09:33:21.713300  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9: {Name:mkf1b0c2b5c3f7a2845479f6c216c14594a7a4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.713467  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9 ...
	I1115 09:33:21.713482  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9: {Name:mk13756454d634edd32ed6b4903dd27d1a7477e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.713567  517398 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt
	I1115 09:33:21.713672  517398 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key
	I1115 09:33:21.713727  517398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key
	I1115 09:33:21.713746  517398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt with IP's: []
	I1115 09:33:21.913701  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt ...
	I1115 09:33:21.913731  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt: {Name:mk1f5058333308a06bc34648e75681e4f6ab5d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.913917  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key ...
	I1115 09:33:21.913933  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key: {Name:mkac56f364f1c4bf572f7f529b4d070437967526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.914123  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 09:33:21.914164  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:21.914193  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:21.914231  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 09:33:21.914784  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:21.932647  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:21.950636  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:21.967426  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:21.983907  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:33:22.001629  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 09:33:22.021241  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:22.039727  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:33:22.058312  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:22.079337  517398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:33:22.095309  517398 ssh_runner.go:195] Run: openssl version
	I1115 09:33:22.102647  517398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:22.111338  517398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:22.115537  517398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:22.115671  517398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:22.159071  517398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:22.167574  517398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:22.171210  517398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:33:22.171261  517398 kubeadm.go:401] StartCluster: {Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:33:22.171348  517398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:33:22.171421  517398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:33:22.200935  517398 cri.go:89] found id: ""
	I1115 09:33:22.201012  517398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:33:22.208715  517398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:33:22.216559  517398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:33:22.216625  517398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:33:22.224580  517398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:33:22.224603  517398 kubeadm.go:158] found existing configuration files:
	
	I1115 09:33:22.224659  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:33:22.232479  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:33:22.232546  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:33:22.239825  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:33:22.248185  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:33:22.248299  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:33:22.255766  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:33:22.263357  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:33:22.263422  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:33:22.270904  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:33:22.278328  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:33:22.278405  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:33:22.285531  517398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:33:22.344032  517398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 09:33:22.344383  517398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 09:33:22.409729  517398 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:33:39.228806  517398 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:33:39.228868  517398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:33:39.228962  517398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:33:39.229039  517398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 09:33:39.229079  517398 kubeadm.go:319] OS: Linux
	I1115 09:33:39.229130  517398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:33:39.229185  517398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 09:33:39.229238  517398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:33:39.229292  517398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:33:39.229346  517398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:33:39.229402  517398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:33:39.229457  517398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:33:39.229514  517398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:33:39.229568  517398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 09:33:39.229669  517398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:33:39.229772  517398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:33:39.229865  517398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:33:39.229930  517398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:33:39.232840  517398 out.go:252]   - Generating certificates and keys ...
	I1115 09:33:39.232937  517398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:33:39.233007  517398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:33:39.233082  517398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:33:39.233144  517398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:33:39.233210  517398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:33:39.233265  517398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:33:39.233323  517398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:33:39.233443  517398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-612806 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:33:39.233500  517398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:33:39.233643  517398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-612806 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:33:39.233714  517398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:33:39.233851  517398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:33:39.233922  517398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:33:39.233993  517398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:33:39.234059  517398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:33:39.234125  517398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:33:39.234196  517398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:33:39.234273  517398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:33:39.234345  517398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:33:39.234433  517398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:33:39.234518  517398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:33:39.237443  517398 out.go:252]   - Booting up control plane ...
	I1115 09:33:39.237543  517398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:33:39.237670  517398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:33:39.237785  517398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:33:39.237916  517398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:33:39.238049  517398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:33:39.238178  517398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:33:39.238273  517398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:33:39.238323  517398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:33:39.238465  517398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:33:39.238588  517398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:33:39.238678  517398 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001342507s
	I1115 09:33:39.238833  517398 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:33:39.238954  517398 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 09:33:39.239065  517398 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:33:39.239153  517398 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:33:39.239250  517398 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.927007073s
	I1115 09:33:39.239367  517398 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.135548899s
	I1115 09:33:39.239455  517398 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501476454s
	I1115 09:33:39.239612  517398 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:33:39.239792  517398 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:33:39.239859  517398 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:33:39.240058  517398 kubeadm.go:319] [mark-control-plane] Marking the node addons-612806 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:33:39.240122  517398 kubeadm.go:319] [bootstrap-token] Using token: g7gars.xwgdud00ybfiyvvb
	I1115 09:33:39.243230  517398 out.go:252]   - Configuring RBAC rules ...
	I1115 09:33:39.243397  517398 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:33:39.243513  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:33:39.243707  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:33:39.243884  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:33:39.244013  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:33:39.244131  517398 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:33:39.244274  517398 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:33:39.244325  517398 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:33:39.244377  517398 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:33:39.244392  517398 kubeadm.go:319] 
	I1115 09:33:39.244458  517398 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:33:39.244468  517398 kubeadm.go:319] 
	I1115 09:33:39.244560  517398 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:33:39.244574  517398 kubeadm.go:319] 
	I1115 09:33:39.244607  517398 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:33:39.244686  517398 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:33:39.244755  517398 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:33:39.244760  517398 kubeadm.go:319] 
	I1115 09:33:39.244826  517398 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:33:39.244838  517398 kubeadm.go:319] 
	I1115 09:33:39.244902  517398 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:33:39.244917  517398 kubeadm.go:319] 
	I1115 09:33:39.244981  517398 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:33:39.245082  517398 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:33:39.245167  517398 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:33:39.245174  517398 kubeadm.go:319] 
	I1115 09:33:39.245275  517398 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:33:39.245377  517398 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:33:39.245387  517398 kubeadm.go:319] 
	I1115 09:33:39.245484  517398 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token g7gars.xwgdud00ybfiyvvb \
	I1115 09:33:39.245712  517398 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 09:33:39.245739  517398 kubeadm.go:319] 	--control-plane 
	I1115 09:33:39.245746  517398 kubeadm.go:319] 
	I1115 09:33:39.245836  517398 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:33:39.245847  517398 kubeadm.go:319] 
	I1115 09:33:39.245935  517398 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token g7gars.xwgdud00ybfiyvvb \
	I1115 09:33:39.246067  517398 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 09:33:39.246080  517398 cni.go:84] Creating CNI manager for ""
	I1115 09:33:39.246088  517398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:33:39.249194  517398 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:33:39.252118  517398 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:33:39.256736  517398 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:33:39.256759  517398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:33:39.270699  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:33:39.553049  517398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:33:39.553251  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:39.553371  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-612806 minikube.k8s.io/updated_at=2025_11_15T09_33_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=addons-612806 minikube.k8s.io/primary=true
	I1115 09:33:39.695839  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:39.695906  517398 ops.go:34] apiserver oom_adj: -16
	I1115 09:33:40.196580  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:40.696449  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:41.195959  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:41.696866  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:42.196288  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:42.696296  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:43.196584  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:43.696796  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:44.196346  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:44.280805  517398 kubeadm.go:1114] duration metric: took 4.727601451s to wait for elevateKubeSystemPrivileges
	I1115 09:33:44.280837  517398 kubeadm.go:403] duration metric: took 22.109581087s to StartCluster
	I1115 09:33:44.280856  517398 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:44.280974  517398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:33:44.281348  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:44.281552  517398 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:44.281718  517398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:33:44.281962  517398 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.281947  517398 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:33:44.282061  517398 addons.go:70] Setting yakd=true in profile "addons-612806"
	I1115 09:33:44.282073  517398 addons.go:70] Setting inspektor-gadget=true in profile "addons-612806"
	I1115 09:33:44.282086  517398 addons.go:70] Setting metrics-server=true in profile "addons-612806"
	I1115 09:33:44.282088  517398 addons.go:239] Setting addon inspektor-gadget=true in "addons-612806"
	I1115 09:33:44.282094  517398 addons.go:239] Setting addon metrics-server=true in "addons-612806"
	I1115 09:33:44.282111  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.282116  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.282225  517398 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-612806"
	I1115 09:33:44.282233  517398 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-612806"
	I1115 09:33:44.282247  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.282595  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.282688  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.283339  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.285789  517398 addons.go:70] Setting registry=true in profile "addons-612806"
	I1115 09:33:44.285860  517398 addons.go:239] Setting addon registry=true in "addons-612806"
	I1115 09:33:44.286313  517398 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-612806"
	I1115 09:33:44.286352  517398 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-612806"
	I1115 09:33:44.286394  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.286163  517398 addons.go:70] Setting registry-creds=true in profile "addons-612806"
	I1115 09:33:44.287325  517398 addons.go:239] Setting addon registry-creds=true in "addons-612806"
	I1115 09:33:44.287355  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.287801  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.298045  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.286327  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.298679  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.286175  517398 addons.go:70] Setting storage-provisioner=true in profile "addons-612806"
	I1115 09:33:44.286179  517398 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-612806"
	I1115 09:33:44.286183  517398 addons.go:70] Setting volcano=true in profile "addons-612806"
	I1115 09:33:44.286190  517398 addons.go:70] Setting volumesnapshots=true in profile "addons-612806"
	I1115 09:33:44.286233  517398 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:44.298708  517398 addons.go:70] Setting cloud-spanner=true in profile "addons-612806"
	I1115 09:33:44.298926  517398 addons.go:239] Setting addon cloud-spanner=true in "addons-612806"
	I1115 09:33:44.298976  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.299501  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.308736  517398 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-612806"
	I1115 09:33:44.309167  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.298717  517398 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-612806"
	I1115 09:33:44.353004  517398 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-612806"
	I1115 09:33:44.353051  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.353509  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.298738  517398 addons.go:70] Setting default-storageclass=true in profile "addons-612806"
	I1115 09:33:44.380926  517398 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-612806"
	I1115 09:33:44.298742  517398 addons.go:70] Setting gcp-auth=true in profile "addons-612806"
	I1115 09:33:44.298746  517398 addons.go:70] Setting ingress=true in profile "addons-612806"
	I1115 09:33:44.298749  517398 addons.go:70] Setting ingress-dns=true in profile "addons-612806"
	I1115 09:33:44.282076  517398 addons.go:239] Setting addon yakd=true in "addons-612806"
	I1115 09:33:44.331906  517398 addons.go:239] Setting addon volcano=true in "addons-612806"
	I1115 09:33:44.331924  517398 addons.go:239] Setting addon volumesnapshots=true in "addons-612806"
	I1115 09:33:44.340778  517398 addons.go:239] Setting addon storage-provisioner=true in "addons-612806"
	I1115 09:33:44.381738  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.410261  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.418848  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.410599  517398 mustload.go:66] Loading cluster: addons-612806
	I1115 09:33:44.434165  517398 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.434499  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.442419  517398 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:33:44.449785  517398 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:33:44.449864  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:33:44.449971  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.410612  517398 addons.go:239] Setting addon ingress=true in "addons-612806"
	I1115 09:33:44.455731  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.457290  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.410620  517398 addons.go:239] Setting addon ingress-dns=true in "addons-612806"
	I1115 09:33:44.470111  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.470716  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.410714  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.473426  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.474540  517398 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:33:44.477571  517398 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:33:44.477687  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:33:44.477798  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.481319  517398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:44.486713  517398 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:33:44.410731  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.492657  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:33:44.492682  517398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:33:44.492762  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.410743  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.501296  517398 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:33:44.502652  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.511444  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.537192  517398 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-612806"
	I1115 09:33:44.537235  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.537656  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.539621  517398 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:33:44.542725  517398 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:33:44.542751  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:33:44.542814  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.565694  517398 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:33:44.565714  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:33:44.565780  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.579738  517398 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:33:44.582717  517398 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 09:33:44.586139  517398 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:33:44.586171  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:33:44.586237  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.606856  517398 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:33:44.610045  517398 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:33:44.610076  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:33:44.610143  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.624714  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.646654  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.650087  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:33:44.650881  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.686561  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:33:44.697961  517398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1115 09:33:44.699506  517398 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:33:44.700895  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:33:44.701793  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.705407  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:33:44.705460  517398 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:33:44.709710  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:33:44.709814  517398 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:33:44.709825  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:33:44.709898  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.712800  517398 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:33:44.712829  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:33:44.712893  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.719880  517398 addons.go:239] Setting addon default-storageclass=true in "addons-612806"
	I1115 09:33:44.719922  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.720341  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.729486  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:33:44.734208  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:33:44.737053  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:33:44.740228  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:33:44.743322  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:33:44.748884  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:33:44.756178  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:33:44.756203  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:33:44.756280  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.785705  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:33:44.788433  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:33:44.788460  517398 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:33:44.788534  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.814554  517398 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:33:44.819308  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:33:44.819334  517398 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:33:44.819437  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.827933  517398 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:33:44.834387  517398 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:33:44.834415  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:33:44.834487  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.862725  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.874686  517398 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:33:44.877715  517398 out.go:179]   - Using image docker.io/busybox:stable
	I1115 09:33:44.889696  517398 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:33:44.889718  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:33:44.889794  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.895013  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.917831  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.936291  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.940483  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.942786  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.954916  517398 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:33:44.954951  517398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:33:44.955025  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.977767  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.994345  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:45.021802  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	W1115 09:33:45.029857  517398 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:33:45.029899  517398 retry.go:31] will retry after 215.209683ms: ssh: handshake failed: EOF
	I1115 09:33:45.032243  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:45.035118  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:45.052970  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	W1115 09:33:45.057622  517398 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:33:45.057664  517398 retry.go:31] will retry after 142.954018ms: ssh: handshake failed: EOF
	I1115 09:33:45.081327  517398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1115 09:33:45.247017  517398 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:33:45.247105  517398 retry.go:31] will retry after 489.43704ms: ssh: handshake failed: EOF
	I1115 09:33:45.417575  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:33:45.531585  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:33:45.537705  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:33:45.537765  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:33:45.567309  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:33:45.601724  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:33:45.638849  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:33:45.638924  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:33:45.658780  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:33:45.695120  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:33:45.698304  517398 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:33:45.698325  517398 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:33:45.714305  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:33:45.714588  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:33:45.714602  517398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:33:45.716903  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:33:45.791076  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:33:45.791153  517398 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:33:45.810296  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:33:45.810371  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:33:45.828281  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:33:45.834698  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:33:45.872977  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:33:45.873049  517398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:33:45.876876  517398 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:33:45.876946  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:33:45.922986  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:33:45.923074  517398 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:33:45.938551  517398 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.240549625s)
	I1115 09:33:45.938718  517398 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1115 09:33:45.940169  517398 node_ready.go:35] waiting up to 6m0s for node "addons-612806" to be "Ready" ...
	I1115 09:33:45.962032  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:33:45.962107  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:33:46.141500  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:33:46.166354  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:33:46.176231  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:33:46.176308  517398 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:33:46.223745  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:33:46.223823  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:33:46.322182  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:33:46.322252  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:33:46.353004  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:33:46.353078  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:33:46.446539  517398 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-612806" context rescaled to 1 replicas
	I1115 09:33:46.524758  517398 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:33:46.524780  517398 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:33:46.540083  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:33:46.564020  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:33:46.564038  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:33:46.852832  517398 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:33:46.852909  517398 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:33:46.893810  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:33:46.893885  517398 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:33:47.159494  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:33:47.159569  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:33:47.223789  517398 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:33:47.223865  517398 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:33:47.334646  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:33:47.334722  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:33:47.388244  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:33:47.388334  517398 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:33:47.404619  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:33:47.432072  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:33:47.432153  517398 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:33:47.448423  517398 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:33:47.448500  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:33:47.511339  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1115 09:33:47.944249  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:49.491074  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.959414557s)
	I1115 09:33:49.491239  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.923859239s)
	I1115 09:33:49.491283  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.073329296s)
	W1115 09:33:49.959268  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:50.480200  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.821334026s)
	I1115 09:33:50.480266  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.785128291s)
	I1115 09:33:50.480311  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.765988937s)
	I1115 09:33:50.480372  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.763412661s)
	I1115 09:33:50.480425  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.652074929s)
	I1115 09:33:50.480666  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.645906562s)
	I1115 09:33:50.480772  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.339199417s)
	I1115 09:33:50.480790  517398 addons.go:480] Verifying addon registry=true in "addons-612806"
	I1115 09:33:50.480993  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.879187595s)
	I1115 09:33:50.481016  517398 addons.go:480] Verifying addon ingress=true in "addons-612806"
	I1115 09:33:50.481277  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.314852365s)
	I1115 09:33:50.481306  517398 addons.go:480] Verifying addon metrics-server=true in "addons-612806"
	I1115 09:33:50.481345  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.941240735s)
	I1115 09:33:50.484232  517398 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-612806 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:33:50.484318  517398 out.go:179] * Verifying ingress addon...
	I1115 09:33:50.484368  517398 out.go:179] * Verifying registry addon...
	I1115 09:33:50.488610  517398 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:33:50.489472  517398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:33:50.500681  517398 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:33:50.500701  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:50.501201  517398 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:33:50.501216  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
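The repeated "waiting for pod ... current state: Pending" lines that follow are kapi.go polling pods by label selector until they leave Pending. A minimal sketch of that kind of poll, assuming client-go and the kubeconfig path shown earlier in the log (the function name and cadence are illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPod lists pods matching selector in ns and returns once one is Running.
func waitForPod(clientset *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls on a similar sub-second cadence
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector taken from the registry addon lines above.
	if err := waitForPod(clientset, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}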
	W1115 09:33:50.509399  517398 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
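The warning above is a standard optimistic-concurrency conflict: the "local-path" StorageClass changed between read and write, so the stale update was rejected. A common remedy, shown here only as a hedged sketch (not what the addon code does), is to wrap the default-class annotation update in retry.RetryOnConflict so each attempt re-reads the latest object:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the latest copy on every attempt so the resourceVersion is current.
		sc, err := clientset.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Standard Kubernetes annotation for marking a default StorageClass.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}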
	I1115 09:33:50.823698  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.418980401s)
	I1115 09:33:50.823729  517398 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-612806"
	I1115 09:33:50.824016  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.312592568s)
	W1115 09:33:50.824072  517398 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:33:50.824105  517398 retry.go:31] will retry after 169.079578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
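The failure and retry above come from applying the snapshot CRDs and the csi-hostpath VolumeSnapshotClass in the same kubectl apply: the CRDs are created, but the custom resource is rejected with "no matches for kind" because the CRD is not yet established when the CR is validated, so minikube retries (and, a few lines below, re-applies with --force). One way to avoid that race, sketched below under the assumption of the apiextensions client-go package, is to wait for the CRD's Established condition before applying the custom resources:

package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// CRD name taken from the error in the log above.
	name := "volumesnapshotclasses.snapshot.storage.k8s.io"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for CRD to become established")
}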
	I1115 09:33:50.826966  517398 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:33:50.830679  517398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:33:50.841283  517398 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:33:50.841315  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:50.992510  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:50.992923  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:50.993892  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:33:51.334994  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:51.492711  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:51.493019  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:51.834868  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:51.993415  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:51.994923  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:52.335112  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:52.381546  517398 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:33:52.381667  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:52.398641  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	W1115 09:33:52.443315  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:52.492956  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:52.493187  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:52.510648  517398 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:33:52.523479  517398 addons.go:239] Setting addon gcp-auth=true in "addons-612806"
	I1115 09:33:52.523531  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:52.523998  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:52.541012  517398 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:33:52.541072  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:52.561698  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:52.834255  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:52.992145  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:52.992607  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:53.336621  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:53.494731  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:53.495595  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:53.644965  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.651033239s)
	I1115 09:33:53.645039  517398 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.104004982s)
	I1115 09:33:53.648012  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:33:53.650829  517398 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:33:53.653683  517398 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:33:53.653706  517398 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:33:53.667283  517398 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:33:53.667352  517398 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:33:53.680407  517398 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:33:53.680431  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:33:53.694718  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:33:53.834566  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:53.993043  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:53.993923  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:54.195140  517398 addons.go:480] Verifying addon gcp-auth=true in "addons-612806"
	I1115 09:33:54.198183  517398 out.go:179] * Verifying gcp-auth addon...
	I1115 09:33:54.201864  517398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:33:54.204353  517398 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:33:54.204372  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:54.334962  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:33:54.443853  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:54.493029  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:54.493176  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:54.707206  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:54.834175  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:54.991856  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:54.993210  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:55.205355  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:55.334778  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:55.492751  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:55.493716  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:55.705661  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:55.834497  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:55.992505  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:55.992652  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:56.204862  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:56.333729  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:33:56.444002  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:56.492862  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:56.493348  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:56.707483  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:56.834618  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:56.991521  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:56.992536  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:57.206090  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:57.334017  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:57.492940  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:57.493548  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:57.708944  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:57.834676  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:57.992483  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:57.992878  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:58.205050  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:58.333929  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:58.492374  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:58.492659  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:58.705477  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:58.834388  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:33:58.943353  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:58.992870  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:58.992970  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:59.204659  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:59.334580  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:59.492650  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:59.492697  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:59.708687  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:59.834473  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:59.992449  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:59.992774  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:00.211099  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:00.336496  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:00.494499  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:00.495283  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:00.705181  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:00.834310  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:00.991917  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:00.992381  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:01.205561  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:01.334434  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:01.443227  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:01.493425  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:01.495552  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:01.706107  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:01.834231  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:01.992803  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:01.993248  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:02.205291  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:02.334528  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:02.492566  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:02.494266  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:02.708000  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:02.834055  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:02.992266  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:02.992424  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:03.205477  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:03.334964  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:03.443801  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:03.492265  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:03.493485  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:03.706056  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:03.833951  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:03.992390  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:03.992662  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:04.204490  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:04.334581  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:04.493309  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:04.493749  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:04.708209  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:04.834193  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:04.992524  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:04.992809  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:05.204849  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:05.333798  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:05.493297  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:05.494064  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:05.708202  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:05.834460  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:05.943717  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:05.993433  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:05.994210  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:06.205314  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:06.334413  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:06.493087  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:06.493166  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:06.707278  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:06.833959  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:06.991862  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:06.992807  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:07.204778  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:07.333823  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:07.492787  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:07.492997  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:07.710870  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:07.833780  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:07.943770  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:07.991871  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:07.992642  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:08.204662  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:08.333931  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:08.492811  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:08.493643  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:08.708256  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:08.834476  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:08.992617  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:08.992957  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:09.205733  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:09.334501  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:09.493477  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:09.493900  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:09.708422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:09.834777  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:09.991373  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:09.992409  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:10.205422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:10.334290  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:10.442932  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:10.493019  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:10.493098  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:10.707263  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:10.834379  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:10.993216  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:10.993348  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:11.204887  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:11.333665  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:11.493577  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:11.494272  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:11.705225  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:11.834175  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:11.992717  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:11.993264  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:12.205197  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:12.334193  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:12.443067  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:12.492711  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:12.493858  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:12.705081  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:12.834480  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:12.991791  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:12.991978  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:13.204828  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:13.333578  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:13.492541  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:13.493176  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:13.705807  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:13.835459  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:13.992903  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:13.992983  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:14.205310  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:14.334271  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:14.443901  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:14.492120  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:14.493108  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:14.709884  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:14.833520  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:14.992581  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:14.992906  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:15.204801  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:15.333529  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:15.492026  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:15.493646  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:15.708159  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:15.834196  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:15.991796  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:15.992906  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:16.205374  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:16.334339  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:16.492935  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:16.493243  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:16.705486  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:16.834416  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:16.943492  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:16.992623  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:16.992754  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:17.205822  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:17.333989  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:17.492805  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:17.493381  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:17.707199  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:17.834802  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:17.992040  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:17.992570  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:18.204952  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:18.333741  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:18.493696  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:18.494313  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:18.708347  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:18.834422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:18.992456  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:18.992655  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:19.205464  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:19.334585  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:19.443541  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:19.491649  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:19.493495  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:19.705627  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:19.834721  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:19.992687  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:19.993038  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:20.204922  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:20.333768  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:20.492438  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:20.493377  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:20.705335  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:20.834701  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:20.991543  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:20.992971  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:21.205019  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:21.333794  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:21.493671  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:21.493843  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:21.704940  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:21.833665  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:21.943461  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:21.992096  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:21.992353  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:22.205410  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:22.334188  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:22.493483  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:22.493571  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:22.708387  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:22.834123  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:22.993099  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:22.993226  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:23.205128  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:23.334011  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:23.492916  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:23.494083  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:23.708465  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:23.834260  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:23.943998  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:23.992418  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:23.992677  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:24.204741  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:24.334516  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:24.493082  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:24.493268  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:24.706892  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:24.833515  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:24.992668  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:24.993020  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:25.204674  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:25.334405  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:25.493079  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:25.494014  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:25.706972  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:25.833865  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:25.991329  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:25.992439  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:26.205162  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:26.334223  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:26.442885  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:26.492787  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:26.493036  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:26.705333  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:26.834359  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:26.992661  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:26.992827  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:27.205424  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:27.334166  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:27.492584  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:27.492883  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:27.741152  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:27.848476  517398 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:34:27.848501  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:27.982881  517398 node_ready.go:49] node "addons-612806" is "Ready"
	I1115 09:34:27.982911  517398 node_ready.go:38] duration metric: took 42.042689843s for node "addons-612806" to be "Ready" ...
	I1115 09:34:27.982926  517398 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:34:27.982986  517398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:34:28.007354  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:28.012492  517398 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:34:28.012520  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:28.025169  517398 api_server.go:72] duration metric: took 43.743579173s to wait for apiserver process to appear ...
	I1115 09:34:28.025244  517398 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:34:28.025280  517398 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:34:28.062050  517398 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:34:28.067986  517398 api_server.go:141] control plane version: v1.34.1
	I1115 09:34:28.068017  517398 api_server.go:131] duration metric: took 42.751292ms to wait for apiserver health ...
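The healthz wait logged just above amounts to an HTTPS GET against the apiserver's /healthz endpoint until it answers 200 with "ok". A minimal, illustrative Go sketch of an equivalent probe follows; it is not minikube's implementation, and it assumes the test cluster's self-signed certificate is simply skipped:

    // Illustrative only: probe an apiserver /healthz endpoint the way the
    // log above does, skipping TLS verification for a self-signed test cert.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expected here: 200 ok
    }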
	I1115 09:34:28.068027  517398 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:34:28.125681  517398 system_pods.go:59] 19 kube-system pods found
	I1115 09:34:28.125731  517398 system_pods.go:61] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.125751  517398 system_pods.go:61] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.125769  517398 system_pods.go:61] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.125775  517398 system_pods.go:61] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending
	I1115 09:34:28.125781  517398 system_pods.go:61] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.125798  517398 system_pods.go:61] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.125803  517398 system_pods.go:61] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.125808  517398 system_pods.go:61] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.125822  517398 system_pods.go:61] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.125833  517398 system_pods.go:61] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.125842  517398 system_pods.go:61] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.125851  517398 system_pods.go:61] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.125867  517398 system_pods.go:61] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending
	I1115 09:34:28.125881  517398 system_pods.go:61] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.125897  517398 system_pods.go:61] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.125903  517398 system_pods.go:61] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending
	I1115 09:34:28.125913  517398 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending
	I1115 09:34:28.125918  517398 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending
	I1115 09:34:28.125942  517398 system_pods.go:61] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.125952  517398 system_pods.go:74] duration metric: took 57.918752ms to wait for pod list to return data ...
	I1115 09:34:28.125967  517398 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:34:28.196634  517398 default_sa.go:45] found service account: "default"
	I1115 09:34:28.196669  517398 default_sa.go:55] duration metric: took 70.691428ms for default service account to be created ...
	I1115 09:34:28.196691  517398 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:34:28.218861  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:28.218905  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.218915  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.218924  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.218930  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending
	I1115 09:34:28.218937  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.218947  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.218952  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.218962  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.218970  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.218980  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.218985  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.218991  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.219002  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:28.219009  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.219022  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.219026  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending
	I1115 09:34:28.219031  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending
	I1115 09:34:28.219035  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending
	I1115 09:34:28.219040  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.219067  517398 retry.go:31] will retry after 211.513842ms: missing components: kube-dns
	I1115 09:34:28.219217  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:28.340710  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:28.440696  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:28.440736  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.440745  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.440752  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.440759  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:34:28.440764  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.440769  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.440773  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.440789  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.440796  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.440809  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.440815  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.440827  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.440835  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:28.440849  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.440856  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.440862  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:34:28.440872  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.440882  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.440889  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.440910  517398 retry.go:31] will retry after 318.709172ms: missing components: kube-dns
	I1115 09:34:28.494414  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:28.494487  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:28.710106  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:28.812868  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:28.812907  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.812916  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.812923  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.812933  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:34:28.812937  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.812943  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.812947  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.812953  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.812958  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.812962  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.812975  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.812981  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.812989  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:28.813000  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.813007  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.813016  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:34:28.813022  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.813033  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.813039  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.813053  517398 retry.go:31] will retry after 390.480865ms: missing components: kube-dns
	I1115 09:34:28.840895  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:28.994505  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:28.994817  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:29.205701  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:29.211005  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:29.211038  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Running
	I1115 09:34:29.211049  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:29.211056  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:29.211065  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:34:29.211069  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:29.211074  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:29.211079  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:29.211083  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:29.211099  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:29.211107  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:29.211112  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:29.211118  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:29.211132  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:29.211139  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:29.211145  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:29.211155  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:34:29.211162  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:29.211169  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:29.211175  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Running
	I1115 09:34:29.211185  517398 system_pods.go:126] duration metric: took 1.014487926s to wait for k8s-apps to be running ...
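The system_pods/retry.go lines above show the polling pattern used while kube-dns came up: list the pods, and if a required component is still missing, sleep for a growing interval and check again. A rough, illustrative Go sketch of that retry-with-increasing-delay loop (not the actual minikube retry package):

    // Illustrative only: re-check a condition, waiting a little longer
    // after each failed attempt, as the "will retry after ..." lines suggest.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func waitFor(check func() error, attempts int, base time.Duration) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := check(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		delay += delay / 2 // back off a bit more each round
    	}
    	return errors.New("condition not met before giving up")
    }

    func main() {
    	start := time.Now()
    	err := waitFor(func() error {
    		if time.Since(start) > 600*time.Millisecond {
    			return nil // pretend kube-dns finally reported Running
    		}
    		return errors.New("missing components: kube-dns")
    	}, 10, 200*time.Millisecond)
    	fmt.Println(err)
    }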
	I1115 09:34:29.211197  517398 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:34:29.211256  517398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:34:29.228404  517398 system_svc.go:56] duration metric: took 17.197087ms WaitForService to wait for kubelet
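The kubelet check above shells out to systemd rather than querying an API: "systemctl is-active --quiet" exits 0 only when the unit is active (the logged command also passes a literal "service" token; the sketch below keeps just the unit name). An illustrative Go equivalent, assuming it runs on the node with sudo available:

    // Illustrative only: ask systemd whether the kubelet unit is active;
    // a nil error from Run() means the command exited 0.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }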
	I1115 09:34:29.228435  517398 kubeadm.go:587] duration metric: took 44.946850914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:34:29.228454  517398 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:34:29.231256  517398 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 09:34:29.231287  517398 node_conditions.go:123] node cpu capacity is 2
	I1115 09:34:29.231301  517398 node_conditions.go:105] duration metric: took 2.842074ms to run NodePressure ...
	I1115 09:34:29.231315  517398 start.go:242] waiting for startup goroutines ...
	I1115 09:34:29.334837  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:29.493638  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:29.494068  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:29.705102  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:29.835680  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:29.992166  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:29.992693  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:30.204638  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:30.334090  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:30.493485  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:30.493827  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:30.704595  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:30.833664  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:30.991885  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:30.993310  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:31.205298  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:31.334366  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:31.493401  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:31.494136  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:31.705509  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:31.835174  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:31.992818  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:31.993034  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:32.204810  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:32.334277  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:32.495237  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:32.495449  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:32.706561  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:32.834630  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:32.994250  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:32.994759  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:33.211390  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:33.335373  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:33.493633  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:33.494868  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:33.705120  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:33.834692  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:33.993956  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:33.994495  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:34.205708  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:34.333813  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:34.492894  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:34.493478  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:34.707375  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:34.834924  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:34.994033  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:34.994722  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:35.206198  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:35.334724  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:35.492040  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:35.494209  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:35.706580  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:35.834812  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:35.994830  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:35.995505  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:36.205746  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:36.333733  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:36.495572  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:36.495750  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:36.706705  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:36.834144  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:37.004181  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:37.004362  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:37.205683  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:37.334109  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:37.495909  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:37.496487  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:37.706314  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:37.835318  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:37.994545  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:37.994784  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:38.204727  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:38.333993  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:38.494047  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:38.494720  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:38.705969  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:38.834853  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:38.994002  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:38.995289  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:39.205278  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:39.335860  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:39.495803  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:39.496278  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:39.707421  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:39.835277  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:39.994297  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:39.994769  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:40.205526  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:40.335575  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:40.496315  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:40.496691  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:40.709673  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:40.837157  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:40.993873  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:40.994300  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:41.205547  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:41.335322  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:41.493234  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:41.494203  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:41.709985  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:41.834420  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:41.992952  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:41.993490  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:42.206377  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:42.335162  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:42.493636  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:42.494032  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:42.705129  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:42.834800  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:42.994493  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:42.994939  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:43.206097  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:43.335717  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:43.492648  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:43.493048  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:43.706756  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:43.835374  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:44.032581  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:44.032760  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:44.205464  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:44.336583  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:44.494980  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:44.506888  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:44.712309  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:44.834983  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:44.993795  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:44.993941  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:45.211871  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:45.336420  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:45.498708  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:45.499993  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:45.711945  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:45.834948  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:45.995891  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:45.996550  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:46.206906  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:46.334976  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:46.498524  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:46.501228  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:46.707753  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:46.833958  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:47.000178  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:47.001463  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:47.205841  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:47.334464  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:47.491410  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:47.493165  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:47.705317  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:47.834514  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:47.993924  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:47.994507  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:48.205401  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:48.334649  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:48.495186  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:48.495362  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:48.706572  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:48.834149  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:48.999681  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:49.000506  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:49.205555  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:49.335798  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:49.492784  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:49.492891  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:49.705631  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:49.834494  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:49.992201  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:49.992309  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:50.205911  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:50.334191  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:50.494072  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:50.494438  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:50.706026  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:50.833941  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:50.998083  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:51.004360  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:51.205636  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:51.335153  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:51.493724  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:51.493832  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:51.704952  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:51.833980  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:51.999974  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:52.004000  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:52.205439  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:52.335054  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:52.492540  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:52.492873  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:52.706481  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:52.835177  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:52.992321  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:52.993006  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:53.204898  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:53.334488  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:53.493084  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:53.493445  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:53.705573  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:53.834083  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:53.993143  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:53.994219  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:54.206766  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:54.334569  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:54.493744  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:54.493874  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:54.705483  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:54.834668  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:54.999410  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:54.999822  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:55.205256  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:55.335107  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:55.494231  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:55.494730  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:55.706263  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:55.835519  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:55.994191  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:55.996531  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:56.205969  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:56.334633  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:56.492521  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:56.492914  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:56.709116  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:56.834419  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:56.993215  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:56.993312  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:57.205541  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:57.334488  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:57.491827  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:57.492666  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:57.706546  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:57.835192  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:57.994596  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:57.994810  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:58.205751  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:58.334769  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:58.493699  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:58.494195  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:58.707422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:58.838141  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:58.993033  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:58.993288  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:59.205535  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:59.335146  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:59.492838  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:59.493183  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:59.709451  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:59.834806  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:59.994947  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:59.995376  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:00.210669  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:00.337631  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:00.497901  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:00.499383  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:00.711030  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:00.834338  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:00.993299  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:00.993547  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:01.209263  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:01.336349  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:01.498083  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:01.498911  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:01.705664  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:01.835303  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:01.992921  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:01.994275  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:02.206446  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:02.335443  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:02.493336  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:02.494106  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:02.706836  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:02.834381  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:02.992606  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:02.993969  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:03.205241  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:03.334394  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:03.506859  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:03.507212  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:03.705539  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:03.834523  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:03.991955  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:03.993621  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:04.206617  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:04.334849  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:04.491795  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:04.493486  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:04.706348  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:04.834476  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:04.991601  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:04.992222  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:05.205864  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:05.334280  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:05.492767  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:05.492951  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:05.707348  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:05.834339  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:05.993183  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:05.993345  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:06.205472  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:06.335188  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:06.494301  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:06.494732  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:06.708316  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:06.835494  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:06.993201  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:06.993911  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:07.205736  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:07.334672  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:07.491656  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:07.493022  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:07.706639  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:07.836209  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:07.993912  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:07.995730  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:08.205658  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:08.334044  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:08.492840  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:08.494219  517398 kapi.go:107] duration metric: took 1m18.004733862s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:35:08.706420  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:08.836655  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:08.992554  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:09.205403  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:09.334593  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:09.491666  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:09.705848  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:09.834693  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:09.992931  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:10.205741  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:10.335169  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:10.496280  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:10.710156  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:10.834904  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:10.992205  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:11.204992  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:11.334022  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:11.491915  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:11.706354  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:11.835190  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:11.992474  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:12.206055  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:12.334674  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:12.493749  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:12.707827  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:12.834897  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:12.993623  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:13.207135  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:13.344291  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:13.497010  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:13.707636  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:13.834905  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:13.993533  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:14.205932  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:14.335175  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:14.492119  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:14.709297  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:14.836791  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:14.992250  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:15.206780  517398 kapi.go:107] duration metric: took 1m21.00491442s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:35:15.211166  517398 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-612806 cluster.
	I1115 09:35:15.214153  517398 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:35:15.217123  517398 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 09:35:15.335063  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:15.492757  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:15.835328  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:15.992755  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:16.335917  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:16.492964  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:16.834416  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:16.991287  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:17.334335  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:17.492067  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:17.834662  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:17.991771  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:18.334198  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:18.492198  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:18.834876  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:18.992878  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:19.334283  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:19.492375  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:19.835022  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:19.992531  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:20.333937  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:20.492510  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:20.835644  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:20.993249  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:21.338622  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:21.491968  517398 kapi.go:107] duration metric: took 1m31.003357411s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:35:21.835640  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:22.334687  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:22.834877  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:23.334717  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:23.834407  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:24.334285  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:24.836646  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:25.335550  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:25.834248  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:26.336925  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:26.835593  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:27.334464  517398 kapi.go:107] duration metric: took 1m36.503778421s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:35:27.337668  517398 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1115 09:35:27.340743  517398 addons.go:515] duration metric: took 1m43.058771228s for enable addons: enabled=[nvidia-device-plugin storage-provisioner inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1115 09:35:27.340801  517398 start.go:247] waiting for cluster config update ...
	I1115 09:35:27.340824  517398 start.go:256] writing updated cluster config ...
	I1115 09:35:27.341135  517398 ssh_runner.go:195] Run: rm -f paused
	I1115 09:35:27.346700  517398 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:35:27.350686  517398 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-msbpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.356284  517398 pod_ready.go:94] pod "coredns-66bc5c9577-msbpd" is "Ready"
	I1115 09:35:27.356309  517398 pod_ready.go:86] duration metric: took 5.593016ms for pod "coredns-66bc5c9577-msbpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.359516  517398 pod_ready.go:83] waiting for pod "etcd-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.366188  517398 pod_ready.go:94] pod "etcd-addons-612806" is "Ready"
	I1115 09:35:27.366279  517398 pod_ready.go:86] duration metric: took 6.736375ms for pod "etcd-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.371191  517398 pod_ready.go:83] waiting for pod "kube-apiserver-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.376857  517398 pod_ready.go:94] pod "kube-apiserver-addons-612806" is "Ready"
	I1115 09:35:27.376881  517398 pod_ready.go:86] duration metric: took 5.656186ms for pod "kube-apiserver-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.380155  517398 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.753188  517398 pod_ready.go:94] pod "kube-controller-manager-addons-612806" is "Ready"
	I1115 09:35:27.753215  517398 pod_ready.go:86] duration metric: took 372.98681ms for pod "kube-controller-manager-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.951172  517398 pod_ready.go:83] waiting for pod "kube-proxy-7s8kz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.360801  517398 pod_ready.go:94] pod "kube-proxy-7s8kz" is "Ready"
	I1115 09:35:28.360880  517398 pod_ready.go:86] duration metric: took 409.630724ms for pod "kube-proxy-7s8kz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.551574  517398 pod_ready.go:83] waiting for pod "kube-scheduler-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.950277  517398 pod_ready.go:94] pod "kube-scheduler-addons-612806" is "Ready"
	I1115 09:35:28.950320  517398 pod_ready.go:86] duration metric: took 398.717422ms for pod "kube-scheduler-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.950338  517398 pod_ready.go:40] duration metric: took 1.603607974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:35:29.020410  517398 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 09:35:29.023514  517398 out.go:179] * Done! kubectl is now configured to use "addons-612806" cluster and "default" namespace by default
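
The gcp-auth output above notes that credentials are mounted into every pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of a pod spec that opts out, assuming the label value "true" and with the pod name and image chosen purely for illustration (they do not appear in this test run):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                    # illustrative name, not from this run
  labels:
    gcp-auth-skip-secret: "true"        # label named in the gcp-auth addon output above
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9    # placeholder image for the sketch
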
	
	
	==> CRI-O <==
	Nov 15 09:38:37 addons-612806 crio[827]: time="2025-11-15T09:38:37.988696247Z" level=info msg="Running pod sandbox: kube-system/registry-creds-764b6fb674-kpz66/POD" id=e53ed376-7e5c-4f82-8b0b-17b94c1e1130 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:38:37 addons-612806 crio[827]: time="2025-11-15T09:38:37.988765488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:38:37 addons-612806 crio[827]: time="2025-11-15T09:38:37.996087054Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-kpz66 Namespace:kube-system ID:7fe1b0c4e66b992217d71755a9c3311c1122cf81f76ffd4c4f62429b8ddbfc0e UID:49c3bf34-3e32-4a3e-b71c-db316210e43a NetNS:/var/run/netns/cb201a95-5d2d-47fe-b872-879f89a0da2f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b25070}] Aliases:map[]}"
	Nov 15 09:38:37 addons-612806 crio[827]: time="2025-11-15T09:38:37.996124148Z" level=info msg="Adding pod kube-system_registry-creds-764b6fb674-kpz66 to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.006445577Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-kpz66 Namespace:kube-system ID:7fe1b0c4e66b992217d71755a9c3311c1122cf81f76ffd4c4f62429b8ddbfc0e UID:49c3bf34-3e32-4a3e-b71c-db316210e43a NetNS:/var/run/netns/cb201a95-5d2d-47fe-b872-879f89a0da2f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b25070}] Aliases:map[]}"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.00661564Z" level=info msg="Checking pod kube-system_registry-creds-764b6fb674-kpz66 for CNI network kindnet (type=ptp)"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.009929265Z" level=info msg="Ran pod sandbox 7fe1b0c4e66b992217d71755a9c3311c1122cf81f76ffd4c4f62429b8ddbfc0e with infra container: kube-system/registry-creds-764b6fb674-kpz66/POD" id=e53ed376-7e5c-4f82-8b0b-17b94c1e1130 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.021889423Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=2caafa2f-ef62-4dde-a53e-3a407b669b97 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.022549498Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=2caafa2f-ef62-4dde-a53e-3a407b669b97 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.022724246Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=2caafa2f-ef62-4dde-a53e-3a407b669b97 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.420691884Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0bd94f0f-ef1b-49b1-8851-3a25a38cf05f name=/runtime.v1.ImageService/PullImage
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.421243096Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3bd64f8e-390f-45f4-a8e6-b51bdf659c81 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.423210702Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=ce5b824b-2003-41ab-bf04-ea93d245da47 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.423966241Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e8eb3e9c-5000-4725-83b5-d55315994d92 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.425173722Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.431843647Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-7xttd/hello-world-app" id=03d2421f-bfca-46e5-833a-50c7710d1663 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.431969576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.439109165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.439316216Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/42e52af80119663ed32b575fdfd9f4b867be2a92536987e408568bb698e18c76/merged/etc/passwd: no such file or directory"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.439347558Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/42e52af80119663ed32b575fdfd9f4b867be2a92536987e408568bb698e18c76/merged/etc/group: no such file or directory"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.439613659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.460955754Z" level=info msg="Created container 37c43a59bb436e5ef03c3f9c0610fbf9a5088d0bde95b8c5255ce37a1d6b23e8: default/hello-world-app-5d498dc89-7xttd/hello-world-app" id=03d2421f-bfca-46e5-833a-50c7710d1663 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.463731738Z" level=info msg="Starting container: 37c43a59bb436e5ef03c3f9c0610fbf9a5088d0bde95b8c5255ce37a1d6b23e8" id=a90fc484-bb29-4d84-80a5-380be3388e5e name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.471082292Z" level=info msg="Started container" PID=7092 containerID=37c43a59bb436e5ef03c3f9c0610fbf9a5088d0bde95b8c5255ce37a1d6b23e8 description=default/hello-world-app-5d498dc89-7xttd/hello-world-app id=a90fc484-bb29-4d84-80a5-380be3388e5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=43f6574712777882eaf75278ea139ddcd6889638751b7d0c011fbc0a9938f9bf
	Nov 15 09:38:38 addons-612806 crio[827]: time="2025-11-15T09:38:38.625214839Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	37c43a59bb436       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   43f6574712777       hello-world-app-5d498dc89-7xttd            default
	971f143cdeed6       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago       Running             nginx                                    0                   89f0053704b30       nginx                                      default
	d567eb19bf636       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago       Running             busybox                                  0                   37093d9b72a30       busybox                                    default
	f5d0536bcdade       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                   kube-system
	2760bb90e56da       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                   kube-system
	33e20be214f16       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                   kube-system
	746d8de6dedd7       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                   kube-system
	7bea5772f4a36       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                   kube-system
	4a102da0b6e13       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago       Running             controller                               0                   0b71193a9d24f       ingress-nginx-controller-6c8bf45fb-lzt9c   ingress-nginx
	f3a3eb6514f75       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   5734b0f3c3bc2       gcp-auth-78565c9fb4-tts8c                  gcp-auth
	e90cb6f34fb09       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago       Running             gadget                                   0                   269266278ffad       gadget-7hxzc                               gadget
	b5e54ce202660       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   388e1150d04d1       registry-proxy-fbtjr                       kube-system
	2aa4139d8dd55       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                   kube-system
	12a854b5199da       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   9bab4960d5518       snapshot-controller-7d9fbc56b8-7w2kr       kube-system
	bf2b5a6db5940       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   e57a7eb1c5871       snapshot-controller-7d9fbc56b8-d9nz8       kube-system
	075d53e5906ff       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ee78fd2107c21       nvidia-device-plugin-daemonset-b6hwh       kube-system
	fb3b6d33a4c26       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago       Exited              patch                                    0                   54fb97f4d207a       ingress-nginx-admission-patch-4m8hk        ingress-nginx
	c48d417c7a19b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago       Exited              create                                   0                   d87bcaf53f7d7       ingress-nginx-admission-create-8zwkg       ingress-nginx
	7cc41ee7d29ef       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago       Running             yakd                                     0                   cfafc98934096       yakd-dashboard-5ff678cb9-b4gnf             yakd-dashboard
	b85dde7237c9e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   7cf2e7d72de8b       csi-hostpath-attacher-0                    kube-system
	e99ded4840b6b       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   40eb08b0121b1       local-path-provisioner-648f6765c9-qfb28    local-path-storage
	0f3e60922b612       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   d10ed202800a5       csi-hostpath-resizer-0                     kube-system
	d8ad2af91929f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago       Running             registry                                 0                   966dce1df0a38       registry-6b586f9694-79xjl                  kube-system
	44f41ef9e3625       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago       Running             metrics-server                           0                   34c34db92d77e       metrics-server-85b7d694d7-4pwlq            kube-system
	393d306bdd197       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago       Running             cloud-spanner-emulator                   0                   877dddd6970de       cloud-spanner-emulator-6f9fcf858b-lxjc6    default
	0c821a004b527       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago       Running             minikube-ingress-dns                     0                   2fdf5b005182c       kube-ingress-dns-minikube                  kube-system
	a25068fa2e690       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   f31c0ff32faf4       coredns-66bc5c9577-msbpd                   kube-system
	2a3c8692022a2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   09fd40706fba5       storage-provisioner                        kube-system
	38ee32437965d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   33ad05db95f06       kube-proxy-7s8kz                           kube-system
	19fe4bfa7943a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   f0c2c07776e94       kindnet-gpq7q                              kube-system
	b546f11eac5f3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   3bd6199efe93d       etcd-addons-612806                         kube-system
	2d41c4d4be99c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   e7c818d44ddd3       kube-apiserver-addons-612806               kube-system
	a834825c233e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   b61b38542c8f6       kube-scheduler-addons-612806               kube-system
	dc26ca1097619       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   3ae9b4c208cca       kube-controller-manager-addons-612806      kube-system
	
	
	==> coredns [a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7] <==
	[INFO] 10.244.0.17:50602 - 11237 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002269758s
	[INFO] 10.244.0.17:50602 - 57848 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127086s
	[INFO] 10.244.0.17:50602 - 18849 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015312s
	[INFO] 10.244.0.17:51138 - 37665 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144816s
	[INFO] 10.244.0.17:51138 - 37460 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000371271s
	[INFO] 10.244.0.17:46047 - 1362 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110913s
	[INFO] 10.244.0.17:46047 - 1140 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151536s
	[INFO] 10.244.0.17:47472 - 40876 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114861s
	[INFO] 10.244.0.17:47472 - 40662 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163974s
	[INFO] 10.244.0.17:42011 - 64665 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001270748s
	[INFO] 10.244.0.17:42011 - 64452 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001321808s
	[INFO] 10.244.0.17:56255 - 43742 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000137292s
	[INFO] 10.244.0.17:56255 - 43306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000236219s
	[INFO] 10.244.0.20:52067 - 21128 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189467s
	[INFO] 10.244.0.20:46385 - 55053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000134108s
	[INFO] 10.244.0.20:52552 - 48966 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130137s
	[INFO] 10.244.0.20:56220 - 20338 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078627s
	[INFO] 10.244.0.20:34252 - 52397 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100297s
	[INFO] 10.244.0.20:39201 - 59513 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095193s
	[INFO] 10.244.0.20:49873 - 50495 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002106358s
	[INFO] 10.244.0.20:53186 - 5847 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001551716s
	[INFO] 10.244.0.20:44440 - 11221 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004413564s
	[INFO] 10.244.0.20:59553 - 12990 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005310153s
	[INFO] 10.244.0.23:58542 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158846s
	[INFO] 10.244.0.23:42502 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164147s
	
	
	==> describe nodes <==
	Name:               addons-612806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-612806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=addons-612806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_33_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-612806
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-612806"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-612806
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:38:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:38:23 +0000   Sat, 15 Nov 2025 09:33:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:38:23 +0000   Sat, 15 Nov 2025 09:33:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:38:23 +0000   Sat, 15 Nov 2025 09:33:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:38:23 +0000   Sat, 15 Nov 2025 09:34:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-612806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c94e744c-8f53-4209-88d4-00cf31bc37c0
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     cloud-spanner-emulator-6f9fcf858b-lxjc6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-7xttd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-7hxzc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-tts8c                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-lzt9c    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-msbpd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpathplugin-bcrc9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 etcd-addons-612806                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m2s
	  kube-system                 kindnet-gpq7q                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-612806                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-addons-612806       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-7s8kz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-612806                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 metrics-server-85b7d694d7-4pwlq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-b6hwh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 registry-6b586f9694-79xjl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-kpz66             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 registry-proxy-fbtjr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-7w2kr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-d9nz8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-qfb28     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b4gnf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m52s  kube-proxy       
	  Normal   Starting                 5m2s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m2s   kubelet          Node addons-612806 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s   kubelet          Node addons-612806 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s   kubelet          Node addons-612806 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s  node-controller  Node addons-612806 event: Registered Node addons-612806 in Controller
	  Normal   NodeReady                4m13s  kubelet          Node addons-612806 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 09:10] overlayfs: idmapped layers are currently not supported
	[Nov15 09:12] overlayfs: idmapped layers are currently not supported
	[Nov15 09:14] overlayfs: idmapped layers are currently not supported
	[ +52.677127] overlayfs: idmapped layers are currently not supported
	[Nov15 09:15] overlayfs: idmapped layers are currently not supported
	[ +18.264224] overlayfs: idmapped layers are currently not supported
	[Nov15 09:16] overlayfs: idmapped layers are currently not supported
	[Nov15 09:17] overlayfs: idmapped layers are currently not supported
	[Nov15 09:19] overlayfs: idmapped layers are currently not supported
	[ +25.565300] overlayfs: idmapped layers are currently not supported
	[Nov15 09:20] overlayfs: idmapped layers are currently not supported
	[Nov15 09:21] overlayfs: idmapped layers are currently not supported
	[Nov15 09:22] overlayfs: idmapped layers are currently not supported
	[ +46.757118] overlayfs: idmapped layers are currently not supported
	[Nov15 09:23] overlayfs: idmapped layers are currently not supported
	[ +24.765155] overlayfs: idmapped layers are currently not supported
	[Nov15 09:24] overlayfs: idmapped layers are currently not supported
	[Nov15 09:25] overlayfs: idmapped layers are currently not supported
	[Nov15 09:26] overlayfs: idmapped layers are currently not supported
	[Nov15 09:27] overlayfs: idmapped layers are currently not supported
	[ +25.160027] overlayfs: idmapped layers are currently not supported
	[Nov15 09:29] overlayfs: idmapped layers are currently not supported
	[ +40.626123] overlayfs: idmapped layers are currently not supported
	[Nov15 09:32] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314] <==
	{"level":"warn","ts":"2025-11-15T09:33:34.730100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.758221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.772693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.802485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.826098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.860883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.878414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.891155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.911978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.926609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.946107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.958328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.973001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.002643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.016406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.058804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.078231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.090854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.194642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:51.263101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:51.272743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.384652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.410136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.438122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.452697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f3a3eb6514f75e5a41f1d79a48e883d63397233ca2e4078d9d8eaffafca420f4] <==
	2025/11/15 09:35:14 GCP Auth Webhook started!
	2025/11/15 09:35:29 Ready to marshal response ...
	2025/11/15 09:35:29 Ready to write response ...
	2025/11/15 09:35:29 Ready to marshal response ...
	2025/11/15 09:35:29 Ready to write response ...
	2025/11/15 09:35:29 Ready to marshal response ...
	2025/11/15 09:35:29 Ready to write response ...
	2025/11/15 09:35:49 Ready to marshal response ...
	2025/11/15 09:35:49 Ready to write response ...
	2025/11/15 09:35:52 Ready to marshal response ...
	2025/11/15 09:35:52 Ready to write response ...
	2025/11/15 09:35:52 Ready to marshal response ...
	2025/11/15 09:35:52 Ready to write response ...
	2025/11/15 09:36:00 Ready to marshal response ...
	2025/11/15 09:36:00 Ready to write response ...
	2025/11/15 09:36:16 Ready to marshal response ...
	2025/11/15 09:36:16 Ready to write response ...
	2025/11/15 09:36:17 Ready to marshal response ...
	2025/11/15 09:36:17 Ready to write response ...
	2025/11/15 09:36:40 Ready to marshal response ...
	2025/11/15 09:36:40 Ready to write response ...
	2025/11/15 09:38:37 Ready to marshal response ...
	2025/11/15 09:38:37 Ready to write response ...
	
	
	==> kernel <==
	 09:38:40 up  4:21,  0 user,  load average: 0.23, 1.53, 2.39
	Linux addons-612806 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d] <==
	I1115 09:36:37.055871       1 main.go:301] handling current node
	I1115 09:36:47.057374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:36:47.057476       1 main.go:301] handling current node
	I1115 09:36:57.056216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:36:57.056251       1 main.go:301] handling current node
	I1115 09:37:07.064168       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:07.064273       1 main.go:301] handling current node
	I1115 09:37:17.063365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:17.063399       1 main.go:301] handling current node
	I1115 09:37:27.062201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:27.062312       1 main.go:301] handling current node
	I1115 09:37:37.056003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:37.056125       1 main.go:301] handling current node
	I1115 09:37:47.062929       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:47.063036       1 main.go:301] handling current node
	I1115 09:37:57.061717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:57.061826       1 main.go:301] handling current node
	I1115 09:38:07.055781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:38:07.055889       1 main.go:301] handling current node
	I1115 09:38:17.056687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:38:17.056726       1 main.go:301] handling current node
	I1115 09:38:27.054822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:38:27.054859       1 main.go:301] handling current node
	I1115 09:38:37.054735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:38:37.054766       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658] <==
	W1115 09:34:49.862979       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:34:49.863027       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1115 09:34:49.863039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1115 09:34:49.864170       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:34:49.864253       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1115 09:34:49.864266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1115 09:34:54.990264       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.32.193:443: connect: connection refused" logger="UnhandledError"
	W1115 09:34:54.990726       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:34:54.990803       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:34:54.991601       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.32.193:443: connect: connection refused" logger="UnhandledError"
	E1115 09:34:54.996828       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.32.193:443: connect: connection refused" logger="UnhandledError"
	I1115 09:34:55.125254       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 09:35:39.255794       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34566: use of closed network connection
	E1115 09:35:39.410950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34592: use of closed network connection
	I1115 09:36:16.181329       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 09:36:16.456683       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.203.171"}
	I1115 09:36:28.898358       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1115 09:36:30.659268       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1115 09:38:37.662468       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.129.40"}
	
	
	==> kube-controller-manager [dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9] <==
	I1115 09:33:43.437339       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:33:43.437395       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-612806"
	I1115 09:33:43.437431       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 09:33:43.440573       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:33:43.440616       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:33:43.440697       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 09:33:43.441073       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:33:43.441565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:33:43.441702       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 09:33:43.442597       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 09:33:43.452142       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 09:33:43.455463       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 09:33:43.455477       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1115 09:33:49.042160       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 09:33:49.072438       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 09:34:13.377726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:34:13.377869       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 09:34:13.377930       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:34:13.426235       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 09:34:13.431378       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:34:13.478214       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:34:13.531792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:34:28.443243       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1115 09:34:43.486262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:34:43.544860       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507] <==
	I1115 09:33:46.879698       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:33:46.971285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:33:47.153809       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:33:47.153834       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:33:47.153910       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:33:47.333275       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:33:47.334479       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:33:47.340903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:33:47.341209       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:33:47.341223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:33:47.345189       1 config.go:200] "Starting service config controller"
	I1115 09:33:47.356460       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:33:47.346362       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:33:47.361398       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:33:47.347124       1 config.go:309] "Starting node config controller"
	I1115 09:33:47.361886       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:33:47.364097       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:33:47.346342       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:33:47.364118       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:33:47.364140       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:33:47.457993       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:33:47.462205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f] <==
	I1115 09:33:36.434961       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:33:36.438437       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:33:36.438547       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 09:33:36.461945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 09:33:36.470464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:33:36.470716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:33:36.470819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:33:36.470906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:33:36.471015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:33:36.471112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:33:36.471196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:33:36.471329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:33:36.471420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:33:36.471494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:33:36.471561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:33:36.471636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:33:36.471712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:33:36.471815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:33:36.471901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:33:36.471999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:33:36.472043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:33:36.472089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:33:37.301240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:33:37.414295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1115 09:33:37.938752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.556883    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba01f188-6a5d-4612-8749-c494c9644072-gcp-creds\") pod \"ba01f188-6a5d-4612-8749-c494c9644072\" (UID: \"ba01f188-6a5d-4612-8749-c494c9644072\") "
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.556970    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba01f188-6a5d-4612-8749-c494c9644072-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ba01f188-6a5d-4612-8749-c494c9644072" (UID: "ba01f188-6a5d-4612-8749-c494c9644072"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.557427    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lj4d\" (UniqueName: \"kubernetes.io/projected/ba01f188-6a5d-4612-8749-c494c9644072-kube-api-access-5lj4d\") pod \"ba01f188-6a5d-4612-8749-c494c9644072\" (UID: \"ba01f188-6a5d-4612-8749-c494c9644072\") "
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.557716    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9657a3f4-c206-11f0-8c1a-723234965fbf\") pod \"ba01f188-6a5d-4612-8749-c494c9644072\" (UID: \"ba01f188-6a5d-4612-8749-c494c9644072\") "
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.557911    1265 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba01f188-6a5d-4612-8749-c494c9644072-gcp-creds\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.562658    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba01f188-6a5d-4612-8749-c494c9644072-kube-api-access-5lj4d" (OuterVolumeSpecName: "kube-api-access-5lj4d") pod "ba01f188-6a5d-4612-8749-c494c9644072" (UID: "ba01f188-6a5d-4612-8749-c494c9644072"). InnerVolumeSpecName "kube-api-access-5lj4d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.569875    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^9657a3f4-c206-11f0-8c1a-723234965fbf" (OuterVolumeSpecName: "task-pv-storage") pod "ba01f188-6a5d-4612-8749-c494c9644072" (UID: "ba01f188-6a5d-4612-8749-c494c9644072"). InnerVolumeSpecName "pvc-e2dd0f11-7156-4b08-baac-194bb287ac58". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.658698    1265 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lj4d\" (UniqueName: \"kubernetes.io/projected/ba01f188-6a5d-4612-8749-c494c9644072-kube-api-access-5lj4d\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.658756    1265 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-e2dd0f11-7156-4b08-baac-194bb287ac58\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9657a3f4-c206-11f0-8c1a-723234965fbf\") on node \"addons-612806\" "
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.665812    1265 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-e2dd0f11-7156-4b08-baac-194bb287ac58" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9657a3f4-c206-11f0-8c1a-723234965fbf") on node "addons-612806"
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.704598    1265 scope.go:117] "RemoveContainer" containerID="5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3"
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.715646    1265 scope.go:117] "RemoveContainer" containerID="5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3"
	Nov 15 09:36:48 addons-612806 kubelet[1265]: E1115 09:36:48.716012    1265 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3\": container with ID starting with 5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3 not found: ID does not exist" containerID="5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3"
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.716195    1265 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3"} err="failed to get container status \"5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3\": rpc error: code = NotFound desc = could not find container \"5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3\": container with ID starting with 5eb9079ed1c01d49fc2b9f1895648444a39597cfec2505bd8fc0c213936223c3 not found: ID does not exist"
	Nov 15 09:36:48 addons-612806 kubelet[1265]: I1115 09:36:48.759869    1265 reconciler_common.go:299] "Volume detached for volume \"pvc-e2dd0f11-7156-4b08-baac-194bb287ac58\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9657a3f4-c206-11f0-8c1a-723234965fbf\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:50 addons-612806 kubelet[1265]: I1115 09:36:50.584641    1265 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba01f188-6a5d-4612-8749-c494c9644072" path="/var/lib/kubelet/pods/ba01f188-6a5d-4612-8749-c494c9644072/volumes"
	Nov 15 09:37:03 addons-612806 kubelet[1265]: I1115 09:37:03.582224    1265 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-79xjl" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:37:37 addons-612806 kubelet[1265]: I1115 09:37:37.581914    1265 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b6hwh" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:37:43 addons-612806 kubelet[1265]: I1115 09:37:43.582355    1265 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-fbtjr" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:38:18 addons-612806 kubelet[1265]: I1115 09:38:18.584538    1265 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-79xjl" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:38:37 addons-612806 kubelet[1265]: I1115 09:38:37.637115    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjkk\" (UniqueName: \"kubernetes.io/projected/1442aae1-d04f-4938-81bf-817b7129d85e-kube-api-access-vqjkk\") pod \"hello-world-app-5d498dc89-7xttd\" (UID: \"1442aae1-d04f-4938-81bf-817b7129d85e\") " pod="default/hello-world-app-5d498dc89-7xttd"
	Nov 15 09:38:37 addons-612806 kubelet[1265]: I1115 09:38:37.637194    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1442aae1-d04f-4938-81bf-817b7129d85e-gcp-creds\") pod \"hello-world-app-5d498dc89-7xttd\" (UID: \"1442aae1-d04f-4938-81bf-817b7129d85e\") " pod="default/hello-world-app-5d498dc89-7xttd"
	Nov 15 09:38:37 addons-612806 kubelet[1265]: W1115 09:38:37.851657    1265 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/crio-43f6574712777882eaf75278ea139ddcd6889638751b7d0c011fbc0a9938f9bf WatchSource:0}: Error finding container 43f6574712777882eaf75278ea139ddcd6889638751b7d0c011fbc0a9938f9bf: Status 404 returned error can't find the container with id 43f6574712777882eaf75278ea139ddcd6889638751b7d0c011fbc0a9938f9bf
	Nov 15 09:38:37 addons-612806 kubelet[1265]: I1115 09:38:37.983513    1265 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-kpz66" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:38:38 addons-612806 kubelet[1265]: E1115 09:38:38.740638    1265 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9e68b57a525e1b6fc85606126db07da082cd5d6abdeccd1171a61dd39a54ff1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9e68b57a525e1b6fc85606126db07da082cd5d6abdeccd1171a61dd39a54ff1/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939] <==
	W1115 09:38:15.580872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:17.584189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:17.588580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:19.591839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:19.596423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:21.599436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:21.603632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:23.607246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:23.613467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:25.616551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:25.620723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:27.624081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:27.630768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:29.634693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:29.641180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:31.643869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:31.648549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:33.652237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:33.656686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:35.659399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:35.667327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:37.678859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:37.685208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:39.689484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:38:39.695958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-612806 -n addons-612806
helpers_test.go:269: (dbg) Run:  kubectl --context addons-612806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-612806 describe pod ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-612806 describe pod ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk: exit status 1 (80.867775ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8zwkg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4m8hk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-612806 describe pod ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (265.017387ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1115 09:38:41.467248  527043 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:38:41.468051  527043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:38:41.468093  527043 out.go:374] Setting ErrFile to fd 2...
	I1115 09:38:41.468114  527043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:38:41.468398  527043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:38:41.468730  527043 mustload.go:66] Loading cluster: addons-612806
	I1115 09:38:41.469217  527043 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:41.469279  527043 addons.go:607] checking whether the cluster is paused
	I1115 09:38:41.469447  527043 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:41.469485  527043 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:38:41.470120  527043 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:38:41.489855  527043 ssh_runner.go:195] Run: systemctl --version
	I1115 09:38:41.489924  527043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:38:41.507170  527043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:38:41.612373  527043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:38:41.612541  527043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:38:41.643745  527043 cri.go:89] found id: "6f4feb0c22ee13699be5d22a1ff8c75c6ac743150667fe873578b5577b77614d"
	I1115 09:38:41.643764  527043 cri.go:89] found id: "6f42dc4f06e686509ed5b164f621a1f11f4410009e1b2e39519873ca1a3d5663"
	I1115 09:38:41.643768  527043 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:38:41.643773  527043 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:38:41.643776  527043 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:38:41.643780  527043 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:38:41.643783  527043 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:38:41.643786  527043 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:38:41.643790  527043 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:38:41.643796  527043 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:38:41.643799  527043 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:38:41.643803  527043 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:38:41.643806  527043 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:38:41.643809  527043 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:38:41.643812  527043 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:38:41.643825  527043 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:38:41.643828  527043 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:38:41.643833  527043 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:38:41.643836  527043 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:38:41.643839  527043 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:38:41.643844  527043 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:38:41.643847  527043 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:38:41.643850  527043 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:38:41.643852  527043 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:38:41.643855  527043 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:38:41.643858  527043 cri.go:89] found id: ""
	I1115 09:38:41.643908  527043 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:38:41.659376  527043 out.go:203] 
	W1115 09:38:41.662242  527043 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:38:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:38:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:38:41.662274  527043 out.go:285] * 
	* 
	W1115 09:38:41.670958  527043 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:38:41.674210  527043 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable ingress --alsologtostderr -v=1: exit status 11 (296.934137ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:38:41.747363  527086 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:38:41.748133  527086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:38:41.748144  527086 out.go:374] Setting ErrFile to fd 2...
	I1115 09:38:41.748149  527086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:38:41.748618  527086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:38:41.749041  527086 mustload.go:66] Loading cluster: addons-612806
	I1115 09:38:41.749775  527086 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:41.749790  527086 addons.go:607] checking whether the cluster is paused
	I1115 09:38:41.749928  527086 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:38:41.749947  527086 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:38:41.757781  527086 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:38:41.786739  527086 ssh_runner.go:195] Run: systemctl --version
	I1115 09:38:41.786803  527086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:38:41.803881  527086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:38:41.908315  527086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:38:41.908402  527086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:38:41.942695  527086 cri.go:89] found id: "6f4feb0c22ee13699be5d22a1ff8c75c6ac743150667fe873578b5577b77614d"
	I1115 09:38:41.942718  527086 cri.go:89] found id: "6f42dc4f06e686509ed5b164f621a1f11f4410009e1b2e39519873ca1a3d5663"
	I1115 09:38:41.942724  527086 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:38:41.942737  527086 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:38:41.942759  527086 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:38:41.942774  527086 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:38:41.942779  527086 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:38:41.942782  527086 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:38:41.942785  527086 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:38:41.942792  527086 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:38:41.942798  527086 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:38:41.942801  527086 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:38:41.942804  527086 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:38:41.942808  527086 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:38:41.942811  527086 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:38:41.942817  527086 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:38:41.942833  527086 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:38:41.942840  527086 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:38:41.942848  527086 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:38:41.942851  527086 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:38:41.942857  527086 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:38:41.942860  527086 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:38:41.942864  527086 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:38:41.942867  527086 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:38:41.942870  527086 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:38:41.942873  527086 cri.go:89] found id: ""
	I1115 09:38:41.942939  527086 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:38:41.957965  527086 out.go:203] 
	W1115 09:38:41.960802  527086 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:38:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:38:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:38:41.960839  527086 out.go:285] * 
	* 
	W1115 09:38:41.967567  527086 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:38:41.970641  527086 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.11s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-7hxzc" [1dcff055-a042-44b6-82be-c785e48ece2f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003594936s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (269.684621ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:15.648764  524864 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:15.649642  524864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:15.649655  524864 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:15.649660  524864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:15.649919  524864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:15.650208  524864 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:15.650558  524864 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:15.650575  524864 addons.go:607] checking whether the cluster is paused
	I1115 09:36:15.650676  524864 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:15.650691  524864 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:15.651145  524864 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:15.668719  524864 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:15.668783  524864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:15.685832  524864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:15.796221  524864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:15.796334  524864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:15.829302  524864 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:15.829380  524864 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:15.829400  524864 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:15.829423  524864 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:15.829452  524864 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:15.829473  524864 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:15.829499  524864 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:15.829521  524864 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:15.829553  524864 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:15.829569  524864 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:15.829574  524864 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:15.829577  524864 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:15.829580  524864 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:15.829583  524864 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:15.829586  524864 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:15.829591  524864 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:15.829622  524864 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:15.829628  524864 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:15.829631  524864 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:15.829634  524864 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:15.829638  524864 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:15.829641  524864 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:15.829644  524864 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:15.829647  524864 cri.go:89] found id: ""
	I1115 09:36:15.829702  524864 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:15.844907  524864 out.go:203] 
	W1115 09:36:15.847744  524864 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:15.847766  524864 out.go:285] * 
	* 
	W1115 09:36:15.854422  524864 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:15.857274  524864 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)
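The inspektor-gadget workload itself is healthy here (k8s-app=gadget reported Running within about 6 seconds); only the shared disable step fails. For reproducing that readiness wait outside the test harness, the sketch below approximates it with client-go: list pods by label in the gadget namespace and wait until every one reports phase Running. The kubeconfig location, poll interval and 8-minute budget are assumptions mirroring the log, and this is a simplification of the test's own helper (which also checks readiness conditions), not its actual code.

	// wait_gadget.go (hypothetical): approximate the "waiting for pods
	// matching k8s-app=gadget" step using client-go directly.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the CI job points at its own MINIKUBE_HOME.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(8 * time.Minute) // same budget as the test
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("gadget").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=gadget"})
			if err == nil && len(pods.Items) > 0 {
				running := 0
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						running++
					}
				}
				if running == len(pods.Items) {
					fmt.Println("all gadget pods Running")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for k8s-app=gadget pods")
	}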

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.399588ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004216505s
addons_test.go:463: (dbg) Run:  kubectl --context addons-612806 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (285.607071ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:09.363972  524775 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:09.364904  524775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:09.364925  524775 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:09.364938  524775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:09.365223  524775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:09.365532  524775 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:09.365986  524775 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:09.366007  524775 addons.go:607] checking whether the cluster is paused
	I1115 09:36:09.366114  524775 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:09.366130  524775 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:09.366597  524775 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:09.384732  524775 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:09.384792  524775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:09.403847  524775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:09.516398  524775 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:09.516480  524775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:09.556837  524775 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:09.556855  524775 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:09.556860  524775 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:09.556872  524775 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:09.556876  524775 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:09.556880  524775 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:09.556883  524775 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:09.556886  524775 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:09.556889  524775 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:09.556896  524775 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:09.556899  524775 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:09.556902  524775 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:09.556905  524775 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:09.556907  524775 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:09.556910  524775 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:09.556915  524775 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:09.556918  524775 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:09.556921  524775 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:09.556924  524775 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:09.556927  524775 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:09.556932  524775 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:09.556935  524775 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:09.556938  524775 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:09.556941  524775 cri.go:89] found id: ""
	I1115 09:36:09.556993  524775 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:09.571778  524775 out.go:203] 
	W1115 09:36:09.574729  524775 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:09.574759  524775 out.go:285] * 
	* 
	W1115 09:36:09.581407  524775 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:09.584220  524775 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.41s)
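metrics-server passes its own checks here (the pod is healthy and kubectl top pods -n kube-system returns cleanly); the failure is again only the trailing disable call. When kubectl top does need debugging, the aggregated API it depends on can be queried directly. The sketch below does that with kubectl get --raw against the metrics.k8s.io endpoint; the context name comes from the log, the file name is hypothetical, and only the pod names are decoded from the response.

	// check_metrics.go (hypothetical): confirm that the metrics.k8s.io API
	// behind "kubectl top pods" is actually serving data for kube-system.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-612806",
			"get", "--raw",
			"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods").Output()
		if err != nil {
			fmt.Println("metrics API not reachable:", err)
			return
		}

		// Decode only the pod names; the full PodMetricsList carries usage too.
		var list struct {
			Items []struct {
				Metadata struct {
					Name string `json:"name"`
				} `json:"metadata"`
			} `json:"items"`
		}
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("unexpected response:", err)
			return
		}
		fmt.Printf("metrics available for %d kube-system pods\n", len(list.Items))
	}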

                                                
                                    
x
+
TestAddons/parallel/CSI (48.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1115 09:36:01.179821  516637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 09:36:01.184433  516637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:36:01.184461  516637 kapi.go:107] duration metric: took 4.654271ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.66689ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-612806 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-612806 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [48b8a06a-f189-4560-b2bd-e86c3a26782f] Pending
helpers_test.go:352: "task-pv-pod" [48b8a06a-f189-4560-b2bd-e86c3a26782f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [48b8a06a-f189-4560-b2bd-e86c3a26782f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004493187s
addons_test.go:572: (dbg) Run:  kubectl --context addons-612806 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-612806 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-612806 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-612806 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-612806 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-612806 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-612806 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ba01f188-6a5d-4612-8749-c494c9644072] Pending
helpers_test.go:352: "task-pv-pod-restore" [ba01f188-6a5d-4612-8749-c494c9644072] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ba01f188-6a5d-4612-8749-c494c9644072] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003183541s
addons_test.go:614: (dbg) Run:  kubectl --context addons-612806 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-612806 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-612806 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (350.162461ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:49.218185  525792 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:49.221234  525792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:49.221287  525792 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:49.221309  525792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:49.221648  525792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:49.221981  525792 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:49.222371  525792 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:49.222404  525792 addons.go:607] checking whether the cluster is paused
	I1115 09:36:49.222542  525792 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:49.222570  525792 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:49.223032  525792 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:49.241361  525792 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:49.241432  525792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:49.267741  525792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:49.389083  525792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:49.389174  525792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:49.435903  525792 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:49.435924  525792 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:49.435929  525792 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:49.435932  525792 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:49.435936  525792 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:49.435940  525792 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:49.435943  525792 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:49.435947  525792 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:49.435949  525792 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:49.435957  525792 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:49.435960  525792 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:49.435963  525792 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:49.435966  525792 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:49.435969  525792 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:49.435972  525792 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:49.435984  525792 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:49.435987  525792 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:49.435992  525792 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:49.435995  525792 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:49.435998  525792 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:49.436002  525792 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:49.436005  525792 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:49.436007  525792 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:49.436010  525792 cri.go:89] found id: ""
	I1115 09:36:49.436057  525792 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:49.472601  525792 out.go:203] 
	W1115 09:36:49.477287  525792 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:49.477383  525792 out.go:285] * 
	* 
	W1115 09:36:49.484063  525792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:49.493745  525792 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (290.852083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:49.568444  525846 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:49.569194  525846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:49.569228  525846 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:49.569248  525846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:49.569526  525846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:49.569916  525846 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:49.570340  525846 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:49.570383  525846 addons.go:607] checking whether the cluster is paused
	I1115 09:36:49.570518  525846 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:49.570544  525846 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:49.571027  525846 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:49.589042  525846 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:49.589114  525846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:49.611959  525846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:49.716807  525846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:49.716907  525846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:49.753217  525846 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:49.753237  525846 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:49.753243  525846 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:49.753247  525846 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:49.753250  525846 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:49.753259  525846 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:49.753263  525846 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:49.753266  525846 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:49.753269  525846 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:49.753276  525846 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:49.753280  525846 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:49.753283  525846 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:49.753286  525846 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:49.753289  525846 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:49.753292  525846 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:49.753300  525846 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:49.753303  525846 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:49.753310  525846 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:49.753314  525846 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:49.753317  525846 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:49.753321  525846 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:49.753325  525846 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:49.753328  525846 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:49.753331  525846 cri.go:89] found id: ""
	I1115 09:36:49.753381  525846 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:49.773454  525846 out.go:203] 
	W1115 09:36:49.777188  525846 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:49.777213  525846 out.go:285] * 
	* 
	W1115 09:36:49.784061  525846 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:49.787707  525846 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.62s)
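The CSI scenario completes its full round trip above (hpvc bound, task-pv-pod running, snapshot taken, hpvc-restore bound, task-pv-pod-restore running) and fails solely on the two trailing disable calls. The repeated helpers_test.go:402 and :427 lines are jsonpath polls of the PVC phase and the VolumeSnapshot readyToUse field; a stand-alone sketch of that loop, with object names and timeouts taken from the log and everything else assumed, looks like this:

	// wait_csi.go (hypothetical): re-create the jsonpath polling used for the
	// hpvc PVC and the new-snapshot-demo VolumeSnapshot in this test.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// pollJSONPath re-runs a kubectl jsonpath query until it returns want.
	func pollJSONPath(want string, timeout time.Duration, args ...string) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", args...).Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %q from kubectl %s", want, strings.Join(args, " "))
	}

	func main() {
		ctx := []string{"--context", "addons-612806", "-n", "default"}

		// PVC should reach phase Bound (helpers_test.go:402 polls this field).
		if err := pollJSONPath("Bound", 6*time.Minute,
			append(ctx, "get", "pvc", "hpvc", "-o", "jsonpath={.status.phase}")...); err != nil {
			fmt.Println(err)
			return
		}

		// VolumeSnapshot should report readyToUse=true (helpers_test.go:427).
		if err := pollJSONPath("true", 6*time.Minute,
			append(ctx, "get", "volumesnapshot", "new-snapshot-demo",
				"-o", "jsonpath={.status.readyToUse}")...); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("PVC bound and snapshot ready")
	}

The same loop shape covers the hpvc-restore wait by swapping in that object's name.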

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-612806 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-612806 --alsologtostderr -v=1: exit status 11 (385.188017ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:00.604100  524097 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:00.604979  524097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:00.604992  524097 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:00.604997  524097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:00.605282  524097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:00.605581  524097 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:00.605990  524097 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:00.606012  524097 addons.go:607] checking whether the cluster is paused
	I1115 09:36:00.606114  524097 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:00.606129  524097 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:00.606555  524097 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:00.637947  524097 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:00.638021  524097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:00.678166  524097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:00.791089  524097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:00.791185  524097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:00.838539  524097 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:00.838556  524097 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:00.838561  524097 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:00.838565  524097 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:00.838568  524097 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:00.838572  524097 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:00.838575  524097 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:00.838579  524097 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:00.838582  524097 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:00.838587  524097 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:00.838590  524097 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:00.838593  524097 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:00.838596  524097 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:00.838599  524097 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:00.838602  524097 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:00.838607  524097 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:00.838610  524097 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:00.838615  524097 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:00.838618  524097 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:00.838621  524097 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:00.838625  524097 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:00.838630  524097 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:00.838633  524097 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:00.838636  524097 cri.go:89] found id: ""
	I1115 09:36:00.838685  524097 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:00.861990  524097 out.go:203] 
	W1115 09:36:00.864860  524097 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:00.864882  524097 out.go:285] * 
	* 
	W1115 09:36:00.872299  524097 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:00.875303  524097 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-612806 --alsologtostderr -v=1": exit status 11
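Note on the failure mode: the MK_ADDON_ENABLE_PAUSED exit in the stderr above comes from minikube's paused-state check, which first lists kube-system containers with crictl and then queries runc, and the `sudo runc list -f json` step fails with "open /run/runc: no such file or directory" inside the crio-based node. A minimal sketch for rerunning that check by hand, using the same commands the log shows (assumptions: the addons-612806 profile from this run is still up, and `minikube ssh` is used to reach the node; neither is part of the captured output):

	# list kube-system container IDs, as the addon check did
	minikube ssh -p addons-612806 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# reproduce the failing paused-state query ("open /run/runc: no such file or directory")
	minikube ssh -p addons-612806 -- sudo runc list -f json
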
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-612806
helpers_test.go:243: (dbg) docker inspect addons-612806:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430",
	        "Created": "2025-11-15T09:33:13.482696763Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:33:13.542000098Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/hostname",
	        "HostsPath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/hosts",
	        "LogPath": "/var/lib/docker/containers/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430-json.log",
	        "Name": "/addons-612806",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-612806:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-612806",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430",
	                "LowerDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/780588ea12473a9083fc48c5b25195aa8462a5461ebe02fd908aafa2897e91a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-612806",
	                "Source": "/var/lib/docker/volumes/addons-612806/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-612806",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-612806",
	                "name.minikube.sigs.k8s.io": "addons-612806",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d0509765267724c1eaf51396ff56b0e41c7fb1cb402b4a0332ae82c3be717b7",
	            "SandboxKey": "/var/run/docker/netns/7d0509765267",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-612806": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:56:19:f8:d0:55",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac6b3eeed3ac962be623bbf517b0be3ce2c94e3e1771253d91fbecf4ee0b09a9",
	                    "EndpointID": "da244236754e67436b73e71d65d108b376261c79d6f38e8b3905dc985deb8912",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-612806",
	                        "438186eb0f36"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
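Two details in the inspect output above are relevant to this group of failures: HostConfig mounts /run and /tmp as tmpfs, so runc's default state directory /run/runc exists only if something has created it there, which is consistent with the "open /run/runc: no such file or directory" error earlier and suggests the crio runtime keeps its runc state elsewhere; and NetworkSettings.Ports is where the SSH port the harness dialed (33498) comes from. A sketch of reading that port back directly, reusing the template string the sshutil step logged (assumption: the addons-612806 container is still running on this host):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-612806
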
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-612806 -n addons-612806
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-612806 logs -n 25: (1.625861657s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-446723 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-446723   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ delete  │ -p download-only-446723                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-446723   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ start   │ -o=json --download-only -p download-only-409645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-409645   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ delete  │ -p download-only-409645                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-409645   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ delete  │ -p download-only-446723                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-446723   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ delete  │ -p download-only-409645                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-409645   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ start   │ --download-only -p download-docker-650018 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-650018 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ delete  │ -p download-docker-650018                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-650018 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ start   │ --download-only -p binary-mirror-339675 --alsologtostderr --binary-mirror http://127.0.0.1:41649 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-339675   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ delete  │ -p binary-mirror-339675                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-339675   │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ addons  │ enable dashboard -p addons-612806                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ addons  │ disable dashboard -p addons-612806                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ start   │ -p addons-612806 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:35 UTC │
	│ addons  │ addons-612806 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ addons  │ addons-612806 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ addons  │ addons-612806 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ addons  │ addons-612806 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ ip      │ addons-612806 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │ 15 Nov 25 09:35 UTC │
	│ addons  │ addons-612806 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │                     │
	│ ssh     │ addons-612806 ssh cat /opt/local-path-provisioner/pvc-656ebd50-b53f-48f0-84f4-4943fda1a953_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:35 UTC │ 15 Nov 25 09:36 UTC │
	│ addons  │ addons-612806 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ enable headlamp -p addons-612806 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	│ addons  │ addons-612806 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-612806          │ jenkins │ v1.37.0 │ 15 Nov 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:32:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:32:47.921727  517398 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:32:47.921836  517398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:47.921850  517398 out.go:374] Setting ErrFile to fd 2...
	I1115 09:32:47.921856  517398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:47.922116  517398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:32:47.922562  517398 out.go:368] Setting JSON to false
	I1115 09:32:47.923398  517398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15319,"bootTime":1763183849,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:32:47.923461  517398 start.go:143] virtualization:  
	I1115 09:32:47.926654  517398 out.go:179] * [addons-612806] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 09:32:47.930414  517398 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:32:47.930497  517398 notify.go:221] Checking for updates...
	I1115 09:32:47.935996  517398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:32:47.938886  517398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:32:47.941693  517398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:32:47.944556  517398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 09:32:47.947504  517398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:32:47.950540  517398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:32:47.982124  517398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:32:47.982250  517398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:48.044103  517398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 09:32:48.034751487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:48.044246  517398 docker.go:319] overlay module found
	I1115 09:32:48.047391  517398 out.go:179] * Using the docker driver based on user configuration
	I1115 09:32:48.050266  517398 start.go:309] selected driver: docker
	I1115 09:32:48.050286  517398 start.go:930] validating driver "docker" against <nil>
	I1115 09:32:48.050300  517398 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:32:48.051043  517398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:48.106634  517398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-15 09:32:48.097646606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:48.106795  517398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:32:48.107060  517398 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:32:48.109995  517398 out.go:179] * Using Docker driver with root privileges
	I1115 09:32:48.112897  517398 cni.go:84] Creating CNI manager for ""
	I1115 09:32:48.112961  517398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:32:48.112975  517398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:32:48.113054  517398 start.go:353] cluster config:
	{Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1115 09:32:48.116089  517398 out.go:179] * Starting "addons-612806" primary control-plane node in "addons-612806" cluster
	I1115 09:32:48.118873  517398 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:32:48.121725  517398 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:32:48.124544  517398 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:32:48.124594  517398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 09:32:48.124609  517398 cache.go:65] Caching tarball of preloaded images
	I1115 09:32:48.124616  517398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:32:48.124692  517398 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 09:32:48.124703  517398 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:32:48.125051  517398 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/config.json ...
	I1115 09:32:48.125082  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/config.json: {Name:mk63094cae3e06c4d6bba640c475a86257cf6dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:32:48.140410  517398 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:32:48.140538  517398 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:32:48.140558  517398 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:32:48.140563  517398 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:32:48.140571  517398 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:32:48.140577  517398 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 09:33:05.954667  517398 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 09:33:05.954707  517398 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:05.954737  517398 start.go:360] acquireMachinesLock for addons-612806: {Name:mk9f453cd28739ad7906c1b688d41cb5ec60c803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:05.954863  517398 start.go:364] duration metric: took 107.944µs to acquireMachinesLock for "addons-612806"
	I1115 09:33:05.954890  517398 start.go:93] Provisioning new machine with config: &{Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:05.954964  517398 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:33:05.958411  517398 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 09:33:05.958658  517398 start.go:159] libmachine.API.Create for "addons-612806" (driver="docker")
	I1115 09:33:05.958706  517398 client.go:173] LocalClient.Create starting
	I1115 09:33:05.958840  517398 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 09:33:06.449363  517398 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 09:33:06.750852  517398 cli_runner.go:164] Run: docker network inspect addons-612806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:33:06.769699  517398 cli_runner.go:211] docker network inspect addons-612806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:33:06.769781  517398 network_create.go:284] running [docker network inspect addons-612806] to gather additional debugging logs...
	I1115 09:33:06.769802  517398 cli_runner.go:164] Run: docker network inspect addons-612806
	W1115 09:33:06.787494  517398 cli_runner.go:211] docker network inspect addons-612806 returned with exit code 1
	I1115 09:33:06.787525  517398 network_create.go:287] error running [docker network inspect addons-612806]: docker network inspect addons-612806: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-612806 not found
	I1115 09:33:06.787552  517398 network_create.go:289] output of [docker network inspect addons-612806]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-612806 not found
	
	** /stderr **
	I1115 09:33:06.787651  517398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:06.805119  517398 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a7170}
	I1115 09:33:06.805167  517398 network_create.go:124] attempt to create docker network addons-612806 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 09:33:06.805223  517398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-612806 addons-612806
	I1115 09:33:06.861275  517398 network_create.go:108] docker network addons-612806 192.168.49.0/24 created
	I1115 09:33:06.861310  517398 kic.go:121] calculated static IP "192.168.49.2" for the "addons-612806" container
	I1115 09:33:06.861383  517398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:33:06.876250  517398 cli_runner.go:164] Run: docker volume create addons-612806 --label name.minikube.sigs.k8s.io=addons-612806 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:33:06.893958  517398 oci.go:103] Successfully created a docker volume addons-612806
	I1115 09:33:06.894047  517398 cli_runner.go:164] Run: docker run --rm --name addons-612806-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-612806 --entrypoint /usr/bin/test -v addons-612806:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:33:09.001478  517398 cli_runner.go:217] Completed: docker run --rm --name addons-612806-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-612806 --entrypoint /usr/bin/test -v addons-612806:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.107383122s)
	I1115 09:33:09.001517  517398 oci.go:107] Successfully prepared a docker volume addons-612806
	I1115 09:33:09.001582  517398 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:09.001637  517398 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 09:33:09.001708  517398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-612806:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 09:33:13.415738  517398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-612806:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413986901s)
	I1115 09:33:13.415771  517398 kic.go:203] duration metric: took 4.414130905s to extract preloaded images to volume ...
	W1115 09:33:13.415902  517398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 09:33:13.416016  517398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:33:13.468669  517398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-612806 --name addons-612806 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-612806 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-612806 --network addons-612806 --ip 192.168.49.2 --volume addons-612806:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:33:13.753264  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Running}}
	I1115 09:33:13.778203  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:13.809391  517398 cli_runner.go:164] Run: docker exec addons-612806 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:33:13.860669  517398 oci.go:144] the created container "addons-612806" has a running status.
	I1115 09:33:13.860697  517398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa...
	I1115 09:33:14.068157  517398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:33:14.115128  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:14.137779  517398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:33:14.137797  517398 kic_runner.go:114] Args: [docker exec --privileged addons-612806 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:33:14.227504  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:14.247839  517398 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:14.247934  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:14.265412  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:14.265772  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:14.265789  517398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:14.266361  517398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50302->127.0.0.1:33498: read: connection reset by peer
	I1115 09:33:17.417177  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-612806
	
	I1115 09:33:17.417203  517398 ubuntu.go:182] provisioning hostname "addons-612806"
	I1115 09:33:17.417266  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:17.435189  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:17.435497  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:17.435513  517398 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-612806 && echo "addons-612806" | sudo tee /etc/hostname
	I1115 09:33:17.594747  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-612806
	
	I1115 09:33:17.594823  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:17.612484  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:17.612798  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:17.612824  517398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-612806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-612806/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-612806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:17.765683  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:17.765747  517398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 09:33:17.765783  517398 ubuntu.go:190] setting up certificates
	I1115 09:33:17.765793  517398 provision.go:84] configureAuth start
	I1115 09:33:17.765854  517398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-612806
	I1115 09:33:17.782355  517398 provision.go:143] copyHostCerts
	I1115 09:33:17.782429  517398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 09:33:17.782542  517398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 09:33:17.782602  517398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 09:33:17.782657  517398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.addons-612806 san=[127.0.0.1 192.168.49.2 addons-612806 localhost minikube]
	I1115 09:33:18.076496  517398 provision.go:177] copyRemoteCerts
	I1115 09:33:18.076564  517398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:18.076613  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.095650  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.201537  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:18.220056  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:18.237223  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:33:18.254443  517398 provision.go:87] duration metric: took 488.624159ms to configureAuth
	I1115 09:33:18.254469  517398 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:18.254653  517398 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:18.254765  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.271576  517398 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:18.271888  517398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1115 09:33:18.271909  517398 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:18.532098  517398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:18.532185  517398 machine.go:97] duration metric: took 4.284309468s to provisionDockerMachine
	I1115 09:33:18.532228  517398 client.go:176] duration metric: took 12.573496901s to LocalClient.Create
	I1115 09:33:18.532282  517398 start.go:167] duration metric: took 12.573625833s to libmachine.API.Create "addons-612806"
	I1115 09:33:18.532312  517398 start.go:293] postStartSetup for "addons-612806" (driver="docker")
	I1115 09:33:18.532336  517398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:18.532440  517398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:18.532550  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.551054  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.657690  517398 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:18.660893  517398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:18.660922  517398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:18.660934  517398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 09:33:18.660998  517398 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 09:33:18.661027  517398 start.go:296] duration metric: took 128.697178ms for postStartSetup
	I1115 09:33:18.661336  517398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-612806
	I1115 09:33:18.677397  517398 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/config.json ...
	I1115 09:33:18.677908  517398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:18.677969  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.694177  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.794973  517398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:18.799787  517398 start.go:128] duration metric: took 12.844807465s to createHost
	I1115 09:33:18.799812  517398 start.go:83] releasing machines lock for "addons-612806", held for 12.844939161s
	I1115 09:33:18.799901  517398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-612806
	I1115 09:33:18.816954  517398 ssh_runner.go:195] Run: cat /version.json
	I1115 09:33:18.817021  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.817312  517398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:18.817368  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:18.839313  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.847284  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:18.941322  517398 ssh_runner.go:195] Run: systemctl --version
	I1115 09:33:19.034552  517398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:19.069255  517398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:19.073580  517398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:19.073669  517398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:19.101444  517398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 09:33:19.101472  517398 start.go:496] detecting cgroup driver to use...
	I1115 09:33:19.101504  517398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 09:33:19.101553  517398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:19.118505  517398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:19.131953  517398 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:19.132018  517398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:19.149518  517398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:19.168035  517398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:19.291841  517398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:19.415217  517398 docker.go:234] disabling docker service ...
	I1115 09:33:19.415384  517398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:19.437936  517398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:19.451712  517398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:19.562751  517398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:19.693859  517398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:19.708277  517398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:19.722241  517398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:19.722315  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.730811  517398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:33:19.730880  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.739749  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.748038  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.757020  517398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:19.765640  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.774629  517398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.788433  517398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:19.797397  517398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:19.805084  517398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:19.812627  517398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:19.920312  517398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:20.046497  517398 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:20.046635  517398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:20.050756  517398 start.go:564] Will wait 60s for crictl version
	I1115 09:33:20.050865  517398 ssh_runner.go:195] Run: which crictl
	I1115 09:33:20.054616  517398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:20.084442  517398 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
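	The crictl version block above is the runtime probe minikube performs against the CRI-O socket configured earlier in this run. A minimal sketch of running the same probe by hand on the node, assuming crictl is installed at /usr/local/bin and CRI-O listens on the socket written to /etc/crictl.yaml above (illustrative, not part of the captured output):
	
	  # query the CRI runtime name and version over the crio socket
	  sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version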
	I1115 09:33:20.084644  517398 ssh_runner.go:195] Run: crio --version
	I1115 09:33:20.116145  517398 ssh_runner.go:195] Run: crio --version
	I1115 09:33:20.147765  517398 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:20.150791  517398 cli_runner.go:164] Run: docker network inspect addons-612806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:20.167992  517398 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:20.172069  517398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:20.181986  517398 kubeadm.go:884] updating cluster {Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:33:20.182120  517398 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:20.182178  517398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:33:20.219935  517398 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:33:20.219960  517398 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:33:20.220021  517398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:33:20.247339  517398 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:33:20.247364  517398 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:33:20.247374  517398 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:33:20.247462  517398 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-612806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:20.247554  517398 ssh_runner.go:195] Run: crio config
	I1115 09:33:20.299720  517398 cni.go:84] Creating CNI manager for ""
	I1115 09:33:20.299746  517398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:33:20.299769  517398 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:33:20.299796  517398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-612806 NodeName:addons-612806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:33:20.299925  517398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-612806"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:33:20.300000  517398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:20.307692  517398 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:20.307795  517398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:33:20.315195  517398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:20.327488  517398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:20.339935  517398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
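	The kubeadm, kubelet and kube-proxy configuration dumped above is the payload copied to /var/tmp/minikube/kubeadm.yaml.new here. A minimal sketch of sanity-checking such a config outside the test, assuming kubeadm v1.34 is on PATH and the YAML has been saved locally as kubeadm.yaml (hypothetical, not part of the captured run):
	
	  # walk the init phases against the generated config without modifying the host
	  sudo kubeadm init --config ./kubeadm.yaml --dry-run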
	I1115 09:33:20.355338  517398 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:20.359287  517398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:20.369004  517398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:20.476582  517398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:20.492435  517398 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806 for IP: 192.168.49.2
	I1115 09:33:20.492468  517398 certs.go:195] generating shared ca certs ...
	I1115 09:33:20.492484  517398 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:20.492662  517398 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 09:33:20.799976  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt ...
	I1115 09:33:20.800013  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt: {Name:mk70893942d6e5c2da13e34d090b8424f8dc0738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:20.800253  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key ...
	I1115 09:33:20.800269  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key: {Name:mk18fb438bed5d4ced16b917b9ea2ab121395897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:20.800362  517398 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 09:33:21.220692  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt ...
	I1115 09:33:21.220722  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt: {Name:mkd960b9e97f7373aafc1d971778195865fc5ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.220902  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key ...
	I1115 09:33:21.220915  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key: {Name:mk4d75450c4a986fcc17d4d30847824e0ed28462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.221008  517398 certs.go:257] generating profile certs ...
	I1115 09:33:21.221075  517398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.key
	I1115 09:33:21.221094  517398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt with IP's: []
	I1115 09:33:21.475433  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt ...
	I1115 09:33:21.475466  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: {Name:mk0d77c5fb4b349381e4035e01d6f84b4212981f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.475650  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.key ...
	I1115 09:33:21.475663  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.key: {Name:mkcf973be359fb928e69db1eb448a2e1aea313a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.475777  517398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9
	I1115 09:33:21.475799  517398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 09:33:21.713262  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9 ...
	I1115 09:33:21.713300  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9: {Name:mkf1b0c2b5c3f7a2845479f6c216c14594a7a4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.713467  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9 ...
	I1115 09:33:21.713482  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9: {Name:mk13756454d634edd32ed6b4903dd27d1a7477e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.713567  517398 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt.3adf97c9 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt
	I1115 09:33:21.713672  517398 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key.3adf97c9 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key
	I1115 09:33:21.713727  517398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key
	I1115 09:33:21.713746  517398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt with IP's: []
	I1115 09:33:21.913701  517398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt ...
	I1115 09:33:21.913731  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt: {Name:mk1f5058333308a06bc34648e75681e4f6ab5d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.913917  517398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key ...
	I1115 09:33:21.913933  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key: {Name:mkac56f364f1c4bf572f7f529b4d070437967526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:21.914123  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 09:33:21.914164  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:21.914193  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:21.914231  517398 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 09:33:21.914784  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:21.932647  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:21.950636  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:21.967426  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:21.983907  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:33:22.001629  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 09:33:22.021241  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:22.039727  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:33:22.058312  517398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:22.079337  517398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:33:22.095309  517398 ssh_runner.go:195] Run: openssl version
	I1115 09:33:22.102647  517398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:22.111338  517398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:22.115537  517398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:22.115671  517398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:22.159071  517398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:22.167574  517398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:22.171210  517398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:33:22.171261  517398 kubeadm.go:401] StartCluster: {Name:addons-612806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-612806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:33:22.171348  517398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:33:22.171421  517398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:33:22.200935  517398 cri.go:89] found id: ""
	I1115 09:33:22.201012  517398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:33:22.208715  517398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:33:22.216559  517398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:33:22.216625  517398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:33:22.224580  517398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:33:22.224603  517398 kubeadm.go:158] found existing configuration files:
	
	I1115 09:33:22.224659  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:33:22.232479  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:33:22.232546  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:33:22.239825  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:33:22.248185  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:33:22.248299  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:33:22.255766  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:33:22.263357  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:33:22.263422  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:33:22.270904  517398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:33:22.278328  517398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:33:22.278405  517398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:33:22.285531  517398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:33:22.344032  517398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 09:33:22.344383  517398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 09:33:22.409729  517398 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:33:39.228806  517398 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:33:39.228868  517398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:33:39.228962  517398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:33:39.229039  517398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 09:33:39.229079  517398 kubeadm.go:319] OS: Linux
	I1115 09:33:39.229130  517398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:33:39.229185  517398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 09:33:39.229238  517398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:33:39.229292  517398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:33:39.229346  517398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:33:39.229402  517398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:33:39.229457  517398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:33:39.229514  517398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:33:39.229568  517398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 09:33:39.229669  517398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:33:39.229772  517398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:33:39.229865  517398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:33:39.229930  517398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:33:39.232840  517398 out.go:252]   - Generating certificates and keys ...
	I1115 09:33:39.232937  517398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:33:39.233007  517398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:33:39.233082  517398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:33:39.233144  517398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:33:39.233210  517398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:33:39.233265  517398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:33:39.233323  517398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:33:39.233443  517398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-612806 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:33:39.233500  517398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:33:39.233643  517398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-612806 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:33:39.233714  517398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:33:39.233851  517398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:33:39.233922  517398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:33:39.233993  517398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:33:39.234059  517398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:33:39.234125  517398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:33:39.234196  517398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:33:39.234273  517398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:33:39.234345  517398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:33:39.234433  517398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:33:39.234518  517398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:33:39.237443  517398 out.go:252]   - Booting up control plane ...
	I1115 09:33:39.237543  517398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:33:39.237670  517398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:33:39.237785  517398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:33:39.237916  517398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:33:39.238049  517398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:33:39.238178  517398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:33:39.238273  517398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:33:39.238323  517398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:33:39.238465  517398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:33:39.238588  517398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:33:39.238678  517398 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001342507s
	I1115 09:33:39.238833  517398 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:33:39.238954  517398 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 09:33:39.239065  517398 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:33:39.239153  517398 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:33:39.239250  517398 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.927007073s
	I1115 09:33:39.239367  517398 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.135548899s
	I1115 09:33:39.239455  517398 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501476454s
	I1115 09:33:39.239612  517398 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:33:39.239792  517398 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:33:39.239859  517398 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:33:39.240058  517398 kubeadm.go:319] [mark-control-plane] Marking the node addons-612806 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:33:39.240122  517398 kubeadm.go:319] [bootstrap-token] Using token: g7gars.xwgdud00ybfiyvvb
	I1115 09:33:39.243230  517398 out.go:252]   - Configuring RBAC rules ...
	I1115 09:33:39.243397  517398 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:33:39.243513  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:33:39.243707  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:33:39.243884  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:33:39.244013  517398 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:33:39.244131  517398 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:33:39.244274  517398 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:33:39.244325  517398 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:33:39.244377  517398 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:33:39.244392  517398 kubeadm.go:319] 
	I1115 09:33:39.244458  517398 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:33:39.244468  517398 kubeadm.go:319] 
	I1115 09:33:39.244560  517398 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:33:39.244574  517398 kubeadm.go:319] 
	I1115 09:33:39.244607  517398 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:33:39.244686  517398 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:33:39.244755  517398 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:33:39.244760  517398 kubeadm.go:319] 
	I1115 09:33:39.244826  517398 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:33:39.244838  517398 kubeadm.go:319] 
	I1115 09:33:39.244902  517398 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:33:39.244917  517398 kubeadm.go:319] 
	I1115 09:33:39.244981  517398 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:33:39.245082  517398 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:33:39.245167  517398 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:33:39.245174  517398 kubeadm.go:319] 
	I1115 09:33:39.245275  517398 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:33:39.245377  517398 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:33:39.245387  517398 kubeadm.go:319] 
	I1115 09:33:39.245484  517398 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token g7gars.xwgdud00ybfiyvvb \
	I1115 09:33:39.245712  517398 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 09:33:39.245739  517398 kubeadm.go:319] 	--control-plane 
	I1115 09:33:39.245746  517398 kubeadm.go:319] 
	I1115 09:33:39.245836  517398 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:33:39.245847  517398 kubeadm.go:319] 
	I1115 09:33:39.245935  517398 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token g7gars.xwgdud00ybfiyvvb \
	I1115 09:33:39.246067  517398 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
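	The block above is the standard kubeadm init success output, including the join commands for additional nodes. A minimal sketch of confirming the control plane that this run just brought up, assuming shell access to the node and the admin kubeconfig at the path kubeadm prints above (illustrative, not captured in this run):
	
	  # list the single control-plane node and the kube-system pods it is running
	  sudo env KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes -o wide
	  sudo env KUBECONFIG=/etc/kubernetes/admin.conf kubectl get pods -n kube-system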
	I1115 09:33:39.246080  517398 cni.go:84] Creating CNI manager for ""
	I1115 09:33:39.246088  517398 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:33:39.249194  517398 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:33:39.252118  517398 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:33:39.256736  517398 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:33:39.256759  517398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:33:39.270699  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:33:39.553049  517398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:33:39.553251  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:39.553371  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-612806 minikube.k8s.io/updated_at=2025_11_15T09_33_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=addons-612806 minikube.k8s.io/primary=true
	I1115 09:33:39.695839  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:39.695906  517398 ops.go:34] apiserver oom_adj: -16
	I1115 09:33:40.196580  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:40.696449  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:41.195959  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:41.696866  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:42.196288  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:42.696296  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:43.196584  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:43.696796  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:44.196346  517398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:33:44.280805  517398 kubeadm.go:1114] duration metric: took 4.727601451s to wait for elevateKubeSystemPrivileges
	I1115 09:33:44.280837  517398 kubeadm.go:403] duration metric: took 22.109581087s to StartCluster
	I1115 09:33:44.280856  517398 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:44.280974  517398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:33:44.281348  517398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:44.281552  517398 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:44.281718  517398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:33:44.281962  517398 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.281947  517398 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:33:44.282061  517398 addons.go:70] Setting yakd=true in profile "addons-612806"
	I1115 09:33:44.282073  517398 addons.go:70] Setting inspektor-gadget=true in profile "addons-612806"
	I1115 09:33:44.282086  517398 addons.go:70] Setting metrics-server=true in profile "addons-612806"
	I1115 09:33:44.282088  517398 addons.go:239] Setting addon inspektor-gadget=true in "addons-612806"
	I1115 09:33:44.282094  517398 addons.go:239] Setting addon metrics-server=true in "addons-612806"
	I1115 09:33:44.282111  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.282116  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.282225  517398 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-612806"
	I1115 09:33:44.282233  517398 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-612806"
	I1115 09:33:44.282247  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.282595  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.282688  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.283339  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.285789  517398 addons.go:70] Setting registry=true in profile "addons-612806"
	I1115 09:33:44.285860  517398 addons.go:239] Setting addon registry=true in "addons-612806"
	I1115 09:33:44.286313  517398 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-612806"
	I1115 09:33:44.286352  517398 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-612806"
	I1115 09:33:44.286394  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.286163  517398 addons.go:70] Setting registry-creds=true in profile "addons-612806"
	I1115 09:33:44.287325  517398 addons.go:239] Setting addon registry-creds=true in "addons-612806"
	I1115 09:33:44.287355  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.287801  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.298045  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.286327  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.298679  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.286175  517398 addons.go:70] Setting storage-provisioner=true in profile "addons-612806"
	I1115 09:33:44.286179  517398 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-612806"
	I1115 09:33:44.286183  517398 addons.go:70] Setting volcano=true in profile "addons-612806"
	I1115 09:33:44.286190  517398 addons.go:70] Setting volumesnapshots=true in profile "addons-612806"
	I1115 09:33:44.286233  517398 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:44.298708  517398 addons.go:70] Setting cloud-spanner=true in profile "addons-612806"
	I1115 09:33:44.298926  517398 addons.go:239] Setting addon cloud-spanner=true in "addons-612806"
	I1115 09:33:44.298976  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.299501  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.308736  517398 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-612806"
	I1115 09:33:44.309167  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.298717  517398 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-612806"
	I1115 09:33:44.353004  517398 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-612806"
	I1115 09:33:44.353051  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.353509  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.298738  517398 addons.go:70] Setting default-storageclass=true in profile "addons-612806"
	I1115 09:33:44.380926  517398 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-612806"
	I1115 09:33:44.298742  517398 addons.go:70] Setting gcp-auth=true in profile "addons-612806"
	I1115 09:33:44.298746  517398 addons.go:70] Setting ingress=true in profile "addons-612806"
	I1115 09:33:44.298749  517398 addons.go:70] Setting ingress-dns=true in profile "addons-612806"
	I1115 09:33:44.282076  517398 addons.go:239] Setting addon yakd=true in "addons-612806"
	I1115 09:33:44.331906  517398 addons.go:239] Setting addon volcano=true in "addons-612806"
	I1115 09:33:44.331924  517398 addons.go:239] Setting addon volumesnapshots=true in "addons-612806"
	I1115 09:33:44.340778  517398 addons.go:239] Setting addon storage-provisioner=true in "addons-612806"
	I1115 09:33:44.381738  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.410261  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.418848  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.410599  517398 mustload.go:66] Loading cluster: addons-612806
	I1115 09:33:44.434165  517398 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.434499  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.442419  517398 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:33:44.449785  517398 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:33:44.449864  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:33:44.449971  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.410612  517398 addons.go:239] Setting addon ingress=true in "addons-612806"
	I1115 09:33:44.455731  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.457290  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.410620  517398 addons.go:239] Setting addon ingress-dns=true in "addons-612806"
	I1115 09:33:44.470111  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.470716  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.410714  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.473426  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.474540  517398 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:33:44.477571  517398 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:33:44.477687  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:33:44.477798  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.481319  517398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:44.486713  517398 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:33:44.410731  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.492657  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:33:44.492682  517398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:33:44.492762  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.410743  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.501296  517398 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:33:44.502652  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.511444  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.537192  517398 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-612806"
	I1115 09:33:44.537235  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.537656  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.539621  517398 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:33:44.542725  517398 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:33:44.542751  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:33:44.542814  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.565694  517398 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:33:44.565714  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:33:44.565780  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.579738  517398 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:33:44.582717  517398 out.go:179]   - Using image docker.io/registry:3.0.0
	I1115 09:33:44.586139  517398 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:33:44.586171  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:33:44.586237  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.606856  517398 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:33:44.610045  517398 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:33:44.610076  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:33:44.610143  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.624714  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.646654  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.650087  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:33:44.650881  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.686561  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:33:44.697961  517398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1115 09:33:44.699506  517398 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:33:44.700895  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:33:44.701793  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.705407  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:33:44.705460  517398 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:33:44.709710  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:33:44.709814  517398 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:33:44.709825  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:33:44.709898  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.712800  517398 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:33:44.712829  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:33:44.712893  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.719880  517398 addons.go:239] Setting addon default-storageclass=true in "addons-612806"
	I1115 09:33:44.719922  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:44.720341  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:44.729486  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:33:44.734208  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:33:44.737053  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:33:44.740228  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:33:44.743322  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:33:44.748884  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:33:44.756178  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:33:44.756203  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:33:44.756280  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.785705  517398 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:33:44.788433  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:33:44.788460  517398 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:33:44.788534  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.814554  517398 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:33:44.819308  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:33:44.819334  517398 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:33:44.819437  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.827933  517398 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:33:44.834387  517398 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:33:44.834415  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:33:44.834487  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.862725  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.874686  517398 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:33:44.877715  517398 out.go:179]   - Using image docker.io/busybox:stable
	I1115 09:33:44.889696  517398 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:33:44.889718  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:33:44.889794  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.895013  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.917831  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.936291  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.940483  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.942786  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.954916  517398 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:33:44.954951  517398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:33:44.955025  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:44.977767  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:44.994345  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:45.021802  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	W1115 09:33:45.029857  517398 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:33:45.029899  517398 retry.go:31] will retry after 215.209683ms: ssh: handshake failed: EOF
	I1115 09:33:45.032243  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:45.035118  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:45.052970  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	W1115 09:33:45.057622  517398 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:33:45.057664  517398 retry.go:31] will retry after 142.954018ms: ssh: handshake failed: EOF
	I1115 09:33:45.081327  517398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1115 09:33:45.247017  517398 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:33:45.247105  517398 retry.go:31] will retry after 489.43704ms: ssh: handshake failed: EOF
	I1115 09:33:45.417575  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:33:45.531585  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:33:45.537705  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:33:45.537765  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:33:45.567309  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:33:45.601724  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:33:45.638849  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:33:45.638924  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:33:45.658780  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:33:45.695120  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:33:45.698304  517398 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:33:45.698325  517398 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:33:45.714305  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:33:45.714588  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:33:45.714602  517398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:33:45.716903  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:33:45.791076  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:33:45.791153  517398 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:33:45.810296  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:33:45.810371  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:33:45.828281  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:33:45.834698  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:33:45.872977  517398 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:33:45.873049  517398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:33:45.876876  517398 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:33:45.876946  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:33:45.922986  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:33:45.923074  517398 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:33:45.938551  517398 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.240549625s)
	I1115 09:33:45.938718  517398 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
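	# Aside (reconstructed from the sed pipeline above, not copied from the cluster): the coredns
	# ConfigMap replace injects a hosts block ahead of the forward plugin, so the Corefile gains roughly:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }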
	I1115 09:33:45.940169  517398 node_ready.go:35] waiting up to 6m0s for node "addons-612806" to be "Ready" ...
	I1115 09:33:45.962032  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:33:45.962107  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:33:46.141500  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:33:46.166354  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:33:46.176231  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:33:46.176308  517398 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:33:46.223745  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:33:46.223823  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:33:46.322182  517398 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:33:46.322252  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:33:46.353004  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:33:46.353078  517398 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:33:46.446539  517398 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-612806" context rescaled to 1 replicas
	I1115 09:33:46.524758  517398 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:33:46.524780  517398 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:33:46.540083  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:33:46.564020  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:33:46.564038  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:33:46.852832  517398 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:33:46.852909  517398 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:33:46.893810  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:33:46.893885  517398 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:33:47.159494  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:33:47.159569  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:33:47.223789  517398 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:33:47.223865  517398 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:33:47.334646  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:33:47.334722  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:33:47.388244  517398 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:33:47.388334  517398 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:33:47.404619  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:33:47.432072  517398 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:33:47.432153  517398 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:33:47.448423  517398 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:33:47.448500  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:33:47.511339  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1115 09:33:47.944249  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:49.491074  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.959414557s)
	I1115 09:33:49.491239  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.923859239s)
	I1115 09:33:49.491283  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.073329296s)
	W1115 09:33:49.959268  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:50.480200  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.821334026s)
	I1115 09:33:50.480266  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.785128291s)
	I1115 09:33:50.480311  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.765988937s)
	I1115 09:33:50.480372  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.763412661s)
	I1115 09:33:50.480425  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.652074929s)
	I1115 09:33:50.480666  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.645906562s)
	I1115 09:33:50.480772  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.339199417s)
	I1115 09:33:50.480790  517398 addons.go:480] Verifying addon registry=true in "addons-612806"
	I1115 09:33:50.480993  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.879187595s)
	I1115 09:33:50.481016  517398 addons.go:480] Verifying addon ingress=true in "addons-612806"
	I1115 09:33:50.481277  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.314852365s)
	I1115 09:33:50.481306  517398 addons.go:480] Verifying addon metrics-server=true in "addons-612806"
	I1115 09:33:50.481345  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.941240735s)
	I1115 09:33:50.484232  517398 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-612806 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:33:50.484318  517398 out.go:179] * Verifying ingress addon...
	I1115 09:33:50.484368  517398 out.go:179] * Verifying registry addon...
	I1115 09:33:50.488610  517398 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:33:50.489472  517398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:33:50.500681  517398 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:33:50.500701  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:50.501201  517398 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:33:50.501216  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:33:50.509399  517398 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
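	# Aside (not part of the captured log): the 'storage-provisioner-rancher' error above is an
	# optimistic-concurrency conflict while flipping the default StorageClass; retrying the same write
	# normally succeeds. The equivalent manual operation with standard kubectl is:
	#   kubectl patch storageclass local-path \
	#     -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'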
	I1115 09:33:50.823698  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.418980401s)
	I1115 09:33:50.823729  517398 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-612806"
	I1115 09:33:50.824016  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.312592568s)
	W1115 09:33:50.824072  517398 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:33:50.824105  517398 retry.go:31] will retry after 169.079578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
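	# Aside (not part of the captured log): this failure is an ordering issue: the VolumeSnapshotClass
	# object is applied in the same batch as the CRDs that define it, so its resource mapping is not yet
	# registered. One standard way to gate dependent objects on CRD readiness is:
	#   kubectl wait --for=condition=Established \
	#     crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# The retry that follows re-applies the same manifests once the CRDs are in place.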
	I1115 09:33:50.826966  517398 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:33:50.830679  517398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:33:50.841283  517398 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:33:50.841315  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:50.992510  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:50.992923  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:50.993892  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:33:51.334994  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:51.492711  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:51.493019  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:51.834868  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:51.993415  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:51.994923  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:52.335112  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:52.381546  517398 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:33:52.381667  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:52.398641  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	W1115 09:33:52.443315  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:52.492956  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:52.493187  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:52.510648  517398 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:33:52.523479  517398 addons.go:239] Setting addon gcp-auth=true in "addons-612806"
	I1115 09:33:52.523531  517398 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:33:52.523998  517398 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:33:52.541012  517398 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:33:52.541072  517398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:33:52.561698  517398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:33:52.834255  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:52.992145  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:52.992607  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:53.336621  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:53.494731  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:53.495595  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:53.644965  517398 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.651033239s)
	I1115 09:33:53.645039  517398 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.104004982s)
	I1115 09:33:53.648012  517398 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:33:53.650829  517398 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:33:53.653683  517398 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:33:53.653706  517398 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:33:53.667283  517398 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:33:53.667352  517398 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:33:53.680407  517398 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:33:53.680431  517398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:33:53.694718  517398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:33:53.834566  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:53.993043  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:53.993923  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:54.195140  517398 addons.go:480] Verifying addon gcp-auth=true in "addons-612806"
	I1115 09:33:54.198183  517398 out.go:179] * Verifying gcp-auth addon...
	I1115 09:33:54.201864  517398 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:33:54.204353  517398 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:33:54.204372  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:54.334962  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:33:54.443853  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:54.493029  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:54.493176  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:54.707206  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:54.834175  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:54.991856  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:54.993210  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:55.205355  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:55.334778  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:55.492751  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:55.493716  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:55.705661  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:55.834497  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:55.992505  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:55.992652  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:56.204862  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:56.333729  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:33:56.444002  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:56.492862  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:56.493348  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:56.707483  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:56.834618  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:56.991521  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:56.992536  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:57.206090  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:57.334017  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:57.492940  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:57.493548  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:57.708944  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:57.834676  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:57.992483  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:57.992878  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:58.205050  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:58.333929  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:58.492374  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:58.492659  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:58.705477  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:58.834388  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:33:58.943353  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:33:58.992870  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:58.992970  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:59.204659  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:59.334580  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:59.492650  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:33:59.492697  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:59.708687  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:33:59.834473  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:33:59.992449  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:33:59.992774  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:00.211099  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:00.336496  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:00.494499  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:00.495283  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:00.705181  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:00.834310  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:00.991917  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:00.992381  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:01.205561  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:01.334434  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:01.443227  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:01.493425  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:01.495552  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:01.706107  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:01.834231  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:01.992803  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:01.993248  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:02.205291  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:02.334528  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:02.492566  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:02.494266  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:02.708000  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:02.834055  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:02.992266  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:02.992424  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:03.205477  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:03.334964  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:03.443801  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:03.492265  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:03.493485  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:03.706056  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:03.833951  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:03.992390  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:03.992662  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:04.204490  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:04.334581  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:04.493309  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:04.493749  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:04.708209  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:04.834193  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:04.992524  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:04.992809  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:05.204849  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:05.333798  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:05.493297  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:05.494064  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:05.708202  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:05.834460  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:05.943717  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:05.993433  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:05.994210  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:06.205314  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:06.334413  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:06.493087  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:06.493166  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:06.707278  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:06.833959  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:06.991862  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:06.992807  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:07.204778  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:07.333823  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:07.492787  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:07.492997  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:07.710870  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:07.833780  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:07.943770  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:07.991871  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:07.992642  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:08.204662  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:08.333931  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:08.492811  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:08.493643  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:08.708256  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:08.834476  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:08.992617  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:08.992957  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:09.205733  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:09.334501  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:09.493477  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:09.493900  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:09.708422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:09.834777  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:09.991373  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:09.992409  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:10.205422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:10.334290  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:10.442932  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:10.493019  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:10.493098  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:10.707263  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:10.834379  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:10.993216  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:10.993348  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:11.204887  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:11.333665  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:11.493577  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:11.494272  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:11.705225  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:11.834175  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:11.992717  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:11.993264  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:12.205197  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:12.334193  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:12.443067  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:12.492711  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:12.493858  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:12.705081  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:12.834480  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:12.991791  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:12.991978  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:13.204828  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:13.333578  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:13.492541  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:13.493176  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:13.705807  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:13.835459  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:13.992903  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:13.992983  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:14.205310  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:14.334271  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:14.443901  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:14.492120  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:14.493108  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:14.709884  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:14.833520  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:14.992581  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:14.992906  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:15.204801  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:15.333529  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:15.492026  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:15.493646  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:15.708159  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:15.834196  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:15.991796  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:15.992906  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:16.205374  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:16.334339  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:16.492935  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:16.493243  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:16.705486  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:16.834416  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:16.943492  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:16.992623  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:16.992754  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:17.205822  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:17.333989  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:17.492805  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:17.493381  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:17.707199  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:17.834802  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:17.992040  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:17.992570  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:18.204952  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:18.333741  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:18.493696  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:18.494313  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:18.708347  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:18.834422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:18.992456  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:18.992655  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:19.205464  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:19.334585  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:19.443541  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:19.491649  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:19.493495  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:19.705627  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:19.834721  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:19.992687  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:19.993038  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:20.204922  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:20.333768  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:20.492438  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:20.493377  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:20.705335  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:20.834701  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:20.991543  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:20.992971  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:21.205019  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:21.333794  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:21.493671  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:21.493843  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:21.704940  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:21.833665  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:21.943461  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:21.992096  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:21.992353  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:22.205410  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:22.334188  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:22.493483  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:22.493571  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:22.708387  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:22.834123  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:22.993099  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:22.993226  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:23.205128  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:23.334011  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:23.492916  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:23.494083  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:23.708465  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:23.834260  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:23.943998  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:23.992418  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:23.992677  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:24.204741  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:24.334516  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:24.493082  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:24.493268  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:24.706892  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:24.833515  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:24.992668  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:24.993020  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:25.204674  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:25.334405  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:25.493079  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:25.494014  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:25.706972  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:25.833865  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:25.991329  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:25.992439  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:26.205162  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:26.334223  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1115 09:34:26.442885  517398 node_ready.go:57] node "addons-612806" has "Ready":"False" status (will retry)
	I1115 09:34:26.492787  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:26.493036  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:26.705333  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:26.834359  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:26.992661  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:26.992827  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:27.205424  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:27.334166  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:27.492584  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:27.492883  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:27.741152  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:27.848476  517398 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:34:27.848501  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:27.982881  517398 node_ready.go:49] node "addons-612806" is "Ready"
	I1115 09:34:27.982911  517398 node_ready.go:38] duration metric: took 42.042689843s for node "addons-612806" to be "Ready" ...
	I1115 09:34:27.982926  517398 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:34:27.982986  517398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:34:28.007354  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:28.012492  517398 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:34:28.012520  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:28.025169  517398 api_server.go:72] duration metric: took 43.743579173s to wait for apiserver process to appear ...
	I1115 09:34:28.025244  517398 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:34:28.025280  517398 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:34:28.062050  517398 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:34:28.067986  517398 api_server.go:141] control plane version: v1.34.1
	I1115 09:34:28.068017  517398 api_server.go:131] duration metric: took 42.751292ms to wait for apiserver health ...
	I1115 09:34:28.068027  517398 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:34:28.125681  517398 system_pods.go:59] 19 kube-system pods found
	I1115 09:34:28.125731  517398 system_pods.go:61] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.125751  517398 system_pods.go:61] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.125769  517398 system_pods.go:61] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.125775  517398 system_pods.go:61] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending
	I1115 09:34:28.125781  517398 system_pods.go:61] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.125798  517398 system_pods.go:61] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.125803  517398 system_pods.go:61] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.125808  517398 system_pods.go:61] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.125822  517398 system_pods.go:61] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.125833  517398 system_pods.go:61] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.125842  517398 system_pods.go:61] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.125851  517398 system_pods.go:61] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.125867  517398 system_pods.go:61] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending
	I1115 09:34:28.125881  517398 system_pods.go:61] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.125897  517398 system_pods.go:61] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.125903  517398 system_pods.go:61] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending
	I1115 09:34:28.125913  517398 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending
	I1115 09:34:28.125918  517398 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending
	I1115 09:34:28.125942  517398 system_pods.go:61] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.125952  517398 system_pods.go:74] duration metric: took 57.918752ms to wait for pod list to return data ...
	I1115 09:34:28.125967  517398 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:34:28.196634  517398 default_sa.go:45] found service account: "default"
	I1115 09:34:28.196669  517398 default_sa.go:55] duration metric: took 70.691428ms for default service account to be created ...
	I1115 09:34:28.196691  517398 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:34:28.218861  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:28.218905  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.218915  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.218924  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.218930  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending
	I1115 09:34:28.218937  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.218947  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.218952  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.218962  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.218970  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.218980  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.218985  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.218991  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.219002  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:28.219009  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.219022  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.219026  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending
	I1115 09:34:28.219031  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending
	I1115 09:34:28.219035  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending
	I1115 09:34:28.219040  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.219067  517398 retry.go:31] will retry after 211.513842ms: missing components: kube-dns
	I1115 09:34:28.219217  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:28.340710  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:28.440696  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:28.440736  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.440745  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.440752  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.440759  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:34:28.440764  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.440769  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.440773  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.440789  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.440796  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.440809  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.440815  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.440827  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.440835  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:28.440849  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.440856  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.440862  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:34:28.440872  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.440882  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.440889  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.440910  517398 retry.go:31] will retry after 318.709172ms: missing components: kube-dns
	I1115 09:34:28.494414  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:28.494487  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:28.710106  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:28.812868  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:28.812907  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:34:28.812916  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:28.812923  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:28.812933  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:34:28.812937  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:28.812943  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:28.812947  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:28.812953  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:28.812958  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:28.812962  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:28.812975  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:28.812981  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:28.812989  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:28.813000  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:28.813007  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:28.813016  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:34:28.813022  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.813033  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:28.813039  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:34:28.813053  517398 retry.go:31] will retry after 390.480865ms: missing components: kube-dns
	I1115 09:34:28.840895  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:28.994505  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:28.994817  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:29.205701  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:29.211005  517398 system_pods.go:86] 19 kube-system pods found
	I1115 09:34:29.211038  517398 system_pods.go:89] "coredns-66bc5c9577-msbpd" [4a7c9ce1-c290-41a1-8abf-29f3f2834e1b] Running
	I1115 09:34:29.211049  517398 system_pods.go:89] "csi-hostpath-attacher-0" [65fffc48-431f-454c-8e09-bb505d95a76e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:34:29.211056  517398 system_pods.go:89] "csi-hostpath-resizer-0" [4eb34891-ac5a-4cf9-9909-f003bc3a9be4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:34:29.211065  517398 system_pods.go:89] "csi-hostpathplugin-bcrc9" [f96f8e5c-104f-4ba8-919f-e23770fc61cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:34:29.211069  517398 system_pods.go:89] "etcd-addons-612806" [a245e70a-748c-4975-aa21-593ce6cb8a75] Running
	I1115 09:34:29.211074  517398 system_pods.go:89] "kindnet-gpq7q" [d817cb7e-116b-463d-975b-1d35cba3b4f1] Running
	I1115 09:34:29.211079  517398 system_pods.go:89] "kube-apiserver-addons-612806" [33c8c3ef-1048-45e7-9017-d732822b8faa] Running
	I1115 09:34:29.211083  517398 system_pods.go:89] "kube-controller-manager-addons-612806" [d082eb9a-b635-47b9-bc48-89c46e04f2aa] Running
	I1115 09:34:29.211099  517398 system_pods.go:89] "kube-ingress-dns-minikube" [d85ebd15-52c4-44a7-88b8-9f18ad0e43e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:34:29.211107  517398 system_pods.go:89] "kube-proxy-7s8kz" [c332942a-ed8f-4afa-8e30-9ac6eb930177] Running
	I1115 09:34:29.211112  517398 system_pods.go:89] "kube-scheduler-addons-612806" [99a91a6e-3228-41a9-bfb2-194423fbdcc1] Running
	I1115 09:34:29.211118  517398 system_pods.go:89] "metrics-server-85b7d694d7-4pwlq" [31515aeb-d50e-40e6-a19c-ab4c52ded5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:34:29.211132  517398 system_pods.go:89] "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:34:29.211139  517398 system_pods.go:89] "registry-6b586f9694-79xjl" [d57d7750-f6c3-478d-a789-8ca415309309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:34:29.211145  517398 system_pods.go:89] "registry-creds-764b6fb674-kpz66" [49c3bf34-3e32-4a3e-b71c-db316210e43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:34:29.211155  517398 system_pods.go:89] "registry-proxy-fbtjr" [76d3a036-7195-4589-8ca1-f2405ffcc28a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:34:29.211162  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7w2kr" [dad2b6aa-8faf-4fb3-a66e-5d44dbe0f395] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:29.211169  517398 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9nz8" [b477d7d9-66c5-4889-a21d-be04451e88bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:34:29.211175  517398 system_pods.go:89] "storage-provisioner" [a295d4eb-06e6-49db-8f53-23748c9e7755] Running
	I1115 09:34:29.211185  517398 system_pods.go:126] duration metric: took 1.014487926s to wait for k8s-apps to be running ...
	I1115 09:34:29.211197  517398 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:34:29.211256  517398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:34:29.228404  517398 system_svc.go:56] duration metric: took 17.197087ms WaitForService to wait for kubelet
	I1115 09:34:29.228435  517398 kubeadm.go:587] duration metric: took 44.946850914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:34:29.228454  517398 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:34:29.231256  517398 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 09:34:29.231287  517398 node_conditions.go:123] node cpu capacity is 2
	I1115 09:34:29.231301  517398 node_conditions.go:105] duration metric: took 2.842074ms to run NodePressure ...
	I1115 09:34:29.231315  517398 start.go:242] waiting for startup goroutines ...
	I1115 09:34:29.334837  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:29.493638  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:29.494068  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:29.705102  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:29.835680  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:29.992166  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:29.992693  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:30.204638  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:30.334090  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:30.493485  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:30.493827  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:30.704595  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:30.833664  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:30.991885  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:30.993310  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:31.205298  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:31.334366  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:31.493401  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:31.494136  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:31.705509  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:31.835174  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:31.992818  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:31.993034  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:32.204810  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:32.334277  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:32.495237  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:32.495449  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:32.706561  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:32.834630  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:32.994250  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:32.994759  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:33.211390  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:33.335373  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:33.493633  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:33.494868  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:33.705120  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:33.834692  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:33.993956  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:33.994495  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:34.205708  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:34.333813  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:34.492894  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:34.493478  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:34.707375  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:34.834924  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:34.994033  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:34.994722  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:35.206198  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:35.334724  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:35.492040  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:35.494209  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:35.706580  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:35.834812  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:35.994830  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:35.995505  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:36.205746  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:36.333733  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:36.495572  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:36.495750  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:36.706705  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:36.834144  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:37.004181  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:37.004362  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:37.205683  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:37.334109  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:37.495909  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:37.496487  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:37.706314  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:37.835318  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:37.994545  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:37.994784  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:38.204727  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:38.333993  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:38.494047  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:38.494720  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:38.705969  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:38.834853  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:38.994002  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:38.995289  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:39.205278  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:39.335860  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:39.495803  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:39.496278  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:39.707421  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:39.835277  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:39.994297  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:39.994769  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:40.205526  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:40.335575  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:40.496315  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:40.496691  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:40.709673  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:40.837157  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:40.993873  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:40.994300  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:41.205547  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:41.335322  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:41.493234  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:41.494203  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:41.709985  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:41.834420  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:41.992952  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:41.993490  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:42.206377  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:42.335162  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:42.493636  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:42.494032  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:42.705129  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:42.834800  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:42.994493  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:42.994939  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:43.206097  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:43.335717  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:43.492648  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:43.493048  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:43.706756  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:43.835374  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:44.032581  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:44.032760  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:44.205464  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:44.336583  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:44.494980  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:44.506888  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:44.712309  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:44.834983  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:44.993795  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:44.993941  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:45.211871  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:45.336420  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:45.498708  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:45.499993  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:45.711945  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:45.834948  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:45.995891  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:45.996550  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:46.206906  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:46.334976  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:46.498524  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:46.501228  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:46.707753  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:46.833958  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:47.000178  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:47.001463  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:47.205841  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:47.334464  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:47.491410  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:47.493165  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:47.705317  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:47.834514  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:47.993924  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:47.994507  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:48.205401  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:48.334649  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:48.495186  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:48.495362  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:48.706572  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:48.834149  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:48.999681  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:49.000506  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:49.205555  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:49.335798  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:49.492784  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:49.492891  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:49.705631  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:49.834494  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:49.992201  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:49.992309  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:50.205911  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:50.334191  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:50.494072  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:50.494438  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:50.706026  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:50.833941  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:50.998083  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:51.004360  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:51.205636  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:51.335153  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:51.493724  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:51.493832  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:51.704952  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:51.833980  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:51.999974  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:52.004000  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:52.205439  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:52.335054  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:52.492540  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:52.492873  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:52.706481  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:52.835177  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:52.992321  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:52.993006  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:53.204898  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:53.334488  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:53.493084  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:53.493445  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:53.705573  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:53.834083  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:53.993143  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:53.994219  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:54.206766  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:54.334569  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:54.493744  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:54.493874  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:54.705483  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:54.834668  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:54.999410  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:54.999822  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:55.205256  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:55.335107  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:55.494231  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:55.494730  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:55.706263  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:55.835519  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:55.994191  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:55.996531  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:56.205969  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:56.334633  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:56.492521  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:56.492914  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:56.709116  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:56.834419  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:56.993215  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:56.993312  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:57.205541  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:57.334488  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:57.491827  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:57.492666  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:57.706546  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:57.835192  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:57.994596  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:57.994810  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:58.205751  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:58.334769  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:58.493699  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:58.494195  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:58.707422  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:58.838141  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:58.993033  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:58.993288  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:59.205535  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:59.335146  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:59.492838  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:59.493183  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:34:59.709451  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:34:59.834806  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:34:59.994947  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:34:59.995376  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:00.210669  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:00.337631  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:00.497901  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:00.499383  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:00.711030  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:00.834338  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:00.993299  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:00.993547  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:01.209263  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:01.336349  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:01.498083  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:01.498911  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:01.705664  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:01.835303  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:01.992921  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:01.994275  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:02.206446  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:02.335443  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:02.493336  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:02.494106  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:02.706836  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:02.834381  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:02.992606  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:02.993969  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:03.205241  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:03.334394  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:03.506859  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:03.507212  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:03.705539  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:03.834523  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:03.991955  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:03.993621  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:04.206617  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:04.334849  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:04.491795  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:04.493486  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:04.706348  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:04.834476  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:04.991601  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:04.992222  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:05.205864  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:05.334280  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:05.492767  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:05.492951  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:05.707348  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:05.834339  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:05.993183  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:05.993345  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:06.205472  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:06.335188  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:06.494301  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:06.494732  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:06.708316  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:06.835494  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:06.993201  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:06.993911  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:07.205736  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:07.334672  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:07.491656  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:07.493022  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:07.706639  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:07.836209  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:07.993912  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:35:07.995730  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:08.205658  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:08.334044  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:08.492840  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:08.494219  517398 kapi.go:107] duration metric: took 1m18.004733862s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:35:08.706420  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:08.836655  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:08.992554  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:09.205403  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:09.334593  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:09.491666  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:09.705848  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:09.834693  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:09.992931  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:10.205741  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:10.335169  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:10.496280  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:10.710156  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:10.834904  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:10.992205  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:11.204992  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:11.334022  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:11.491915  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:11.706354  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:11.835190  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:11.992474  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:12.206055  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:12.334674  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:12.493749  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:12.707827  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:12.834897  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:12.993623  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:13.207135  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:13.344291  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:13.497010  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:13.707636  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:13.834905  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:13.993533  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:14.205932  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:14.335175  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:14.492119  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:14.709297  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:35:14.836791  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:14.992250  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:15.206780  517398 kapi.go:107] duration metric: took 1m21.00491442s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:35:15.211166  517398 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-612806 cluster.
	I1115 09:35:15.214153  517398 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:35:15.217123  517398 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
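	[Editor's illustration, not part of the captured log] The three messages above describe the gcp-auth addon's opt-out mechanism: credentials are injected into newly created pods unless the pod carries the `gcp-auth-skip-secret` label. A minimal, hypothetical way to exercise that opt-out — assuming kubectl is pointed at the addons-612806 cluster, that the pod name and image are placeholders, and that the addon honors the label value "true" as documented — would be:
	
	    kubectl run skip-gcp-auth-demo --image=nginx --labels=gcp-auth-skip-secret=true
	
	Pods created this way should come up without the mounted GCP credential secret, while all other new pods in the cluster receive it.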
	I1115 09:35:15.335063  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:15.492757  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:15.835328  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:15.992755  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:16.335917  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:16.492964  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:16.834416  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:16.991287  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:17.334335  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:17.492067  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:17.834662  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:17.991771  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:18.334198  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:18.492198  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:18.834876  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:18.992878  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:19.334283  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:19.492375  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:19.835022  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:19.992531  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:20.333937  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:20.492510  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:20.835644  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:20.993249  517398 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:35:21.338622  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:21.491968  517398 kapi.go:107] duration metric: took 1m31.003357411s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:35:21.835640  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:22.334687  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:22.834877  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:23.334717  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:23.834407  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:24.334285  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:24.836646  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:25.335550  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:25.834248  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:26.336925  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:26.835593  517398 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:35:27.334464  517398 kapi.go:107] duration metric: took 1m36.503778421s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:35:27.337668  517398 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1115 09:35:27.340743  517398 addons.go:515] duration metric: took 1m43.058771228s for enable addons: enabled=[nvidia-device-plugin storage-provisioner inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1115 09:35:27.340801  517398 start.go:247] waiting for cluster config update ...
	I1115 09:35:27.340824  517398 start.go:256] writing updated cluster config ...
	I1115 09:35:27.341135  517398 ssh_runner.go:195] Run: rm -f paused
	I1115 09:35:27.346700  517398 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:35:27.350686  517398 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-msbpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.356284  517398 pod_ready.go:94] pod "coredns-66bc5c9577-msbpd" is "Ready"
	I1115 09:35:27.356309  517398 pod_ready.go:86] duration metric: took 5.593016ms for pod "coredns-66bc5c9577-msbpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.359516  517398 pod_ready.go:83] waiting for pod "etcd-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.366188  517398 pod_ready.go:94] pod "etcd-addons-612806" is "Ready"
	I1115 09:35:27.366279  517398 pod_ready.go:86] duration metric: took 6.736375ms for pod "etcd-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.371191  517398 pod_ready.go:83] waiting for pod "kube-apiserver-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.376857  517398 pod_ready.go:94] pod "kube-apiserver-addons-612806" is "Ready"
	I1115 09:35:27.376881  517398 pod_ready.go:86] duration metric: took 5.656186ms for pod "kube-apiserver-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.380155  517398 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.753188  517398 pod_ready.go:94] pod "kube-controller-manager-addons-612806" is "Ready"
	I1115 09:35:27.753215  517398 pod_ready.go:86] duration metric: took 372.98681ms for pod "kube-controller-manager-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:27.951172  517398 pod_ready.go:83] waiting for pod "kube-proxy-7s8kz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.360801  517398 pod_ready.go:94] pod "kube-proxy-7s8kz" is "Ready"
	I1115 09:35:28.360880  517398 pod_ready.go:86] duration metric: took 409.630724ms for pod "kube-proxy-7s8kz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.551574  517398 pod_ready.go:83] waiting for pod "kube-scheduler-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.950277  517398 pod_ready.go:94] pod "kube-scheduler-addons-612806" is "Ready"
	I1115 09:35:28.950320  517398 pod_ready.go:86] duration metric: took 398.717422ms for pod "kube-scheduler-addons-612806" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:35:28.950338  517398 pod_ready.go:40] duration metric: took 1.603607974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:35:29.020410  517398 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 09:35:29.023514  517398 out.go:179] * Done! kubectl is now configured to use "addons-612806" cluster and "default" namespace by default
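
The pod_ready wait recorded above (extra waiting for kube-system pods carrying one of the kube-dns/etcd/kube-apiserver/kube-controller-manager/kube-proxy/kube-scheduler labels to become "Ready") can be approximated with client-go. The following is a minimal sketch only, not minikube's actual pod_ready implementation: the label selectors and the 4m0s overall timeout are taken from the log lines above, while the kubeconfig location and the 2-second poll interval are assumptions for illustration, and the "or be gone" case in the log messages is not handled here.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition of p is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Label selectors taken from the "extra waiting" log line above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}

	// 4m0s overall timeout as in the log; the 2s poll interval is an assumption.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			for _, sel := range selectors {
				pods, listErr := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if listErr != nil {
					return false, nil // transient API error: just poll again
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil // at least one matching pod not Ready yet
					}
				}
			}
			return true, nil
		})
	fmt.Println("all labelled kube-system pods Ready:", err == nil)
}

Each poll lists the pods matching one selector at a time and bails out on the first pod whose PodReady condition is not True, which is essentially what the per-pod "waiting for pod ... to be Ready" messages above are reporting.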
	
	
	==> CRI-O <==
	Nov 15 09:35:57 addons-612806 crio[827]: time="2025-11-15T09:35:57.619771414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:35:57 addons-612806 crio[827]: time="2025-11-15T09:35:57.620347536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:35:57 addons-612806 crio[827]: time="2025-11-15T09:35:57.637546241Z" level=info msg="Created container ff43f186940695b2951f2a61b2d1af9ea7bf29849f4ef0ab53ee8e78dd37a86b: default/test-local-path/busybox" id=7f3ada35-e07b-41c6-b38b-48276867af5a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:35:57 addons-612806 crio[827]: time="2025-11-15T09:35:57.640723477Z" level=info msg="Starting container: ff43f186940695b2951f2a61b2d1af9ea7bf29849f4ef0ab53ee8e78dd37a86b" id=b74baa97-b90e-496a-baee-3b88baaceae1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:35:57 addons-612806 crio[827]: time="2025-11-15T09:35:57.642567927Z" level=info msg="Started container" PID=5319 containerID=ff43f186940695b2951f2a61b2d1af9ea7bf29849f4ef0ab53ee8e78dd37a86b description=default/test-local-path/busybox id=b74baa97-b90e-496a-baee-3b88baaceae1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e7f853dea67b81c7c16e67e9dddbfcabc07caf659e26f789e5118749ce5831a
	Nov 15 09:35:59 addons-612806 crio[827]: time="2025-11-15T09:35:59.377951195Z" level=info msg="Stopping pod sandbox: 1e7f853dea67b81c7c16e67e9dddbfcabc07caf659e26f789e5118749ce5831a" id=ffb355a5-7c32-476d-bdf4-353f5614765f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:35:59 addons-612806 crio[827]: time="2025-11-15T09:35:59.37831477Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:1e7f853dea67b81c7c16e67e9dddbfcabc07caf659e26f789e5118749ce5831a UID:2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f NetNS:/var/run/netns/4da04001-610b-4ff8-a428-64ec8cc38787 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b24be0}] Aliases:map[]}"
	Nov 15 09:35:59 addons-612806 crio[827]: time="2025-11-15T09:35:59.378497214Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:35:59 addons-612806 crio[827]: time="2025-11-15T09:35:59.406641566Z" level=info msg="Stopped pod sandbox: 1e7f853dea67b81c7c16e67e9dddbfcabc07caf659e26f789e5118749ce5831a" id=ffb355a5-7c32-476d-bdf4-353f5614765f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.074758416Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953/POD" id=d9c44456-a14a-4595-86d7-15a8176e77e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.074829355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.098497859Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953 Namespace:local-path-storage ID:77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e UID:4daf721c-0437-4b3c-b0d5-1ffc49b92564 NetNS:/var/run/netns/399b30dc-bf21-4c8f-ab82-4feed7a79351 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f338}] Aliases:map[]}"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.09854589Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953 to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.122889854Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953 Namespace:local-path-storage ID:77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e UID:4daf721c-0437-4b3c-b0d5-1ffc49b92564 NetNS:/var/run/netns/399b30dc-bf21-4c8f-ab82-4feed7a79351 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f338}] Aliases:map[]}"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.123037935Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953 for CNI network kindnet (type=ptp)"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.126246924Z" level=info msg="Ran pod sandbox 77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e with infra container: local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953/POD" id=d9c44456-a14a-4595-86d7-15a8176e77e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.129980762Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=6dca0a3d-111b-4605-8284-d58af84b8414 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.13702652Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=70739ad7-667f-4354-a6dc-26693bd1c09c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.145090988Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953/helper-pod" id=78216c0a-f025-4276-afca-ea5b105c6d9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.145273464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.157064747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.157798264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.198022497Z" level=info msg="Created container 80ac2473949aa5417635227c85ec6bbfdb6aa86528e7e4035232477d3a357701: local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953/helper-pod" id=78216c0a-f025-4276-afca-ea5b105c6d9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.201942308Z" level=info msg="Starting container: 80ac2473949aa5417635227c85ec6bbfdb6aa86528e7e4035232477d3a357701" id=9ed56071-fa9e-46ec-876d-1013e272a748 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:36:01 addons-612806 crio[827]: time="2025-11-15T09:36:01.204226868Z" level=info msg="Started container" PID=5457 containerID=80ac2473949aa5417635227c85ec6bbfdb6aa86528e7e4035232477d3a357701 description=local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953/helper-pod id=9ed56071-fa9e-46ec-876d-1013e272a748 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	80ac2473949aa       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   77f4566581179       helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953   local-path-storage
	ff43f18694069       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   1e7f853dea67b       test-local-path                                              default
	240514d4a05c3       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   e7d5b1c886cd2       helper-pod-create-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953   local-path-storage
	d567eb19bf636       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   37093d9b72a30       busybox                                                      default
	f5d0536bcdade       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          35 seconds ago       Running             csi-snapshotter                          0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                                     kube-system
	2760bb90e56da       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          36 seconds ago       Running             csi-provisioner                          0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                                     kube-system
	33e20be214f16       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            38 seconds ago       Running             liveness-probe                           0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                                     kube-system
	746d8de6dedd7       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           39 seconds ago       Running             hostpath                                 0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                                     kube-system
	7bea5772f4a36       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                40 seconds ago       Running             node-driver-registrar                    0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                                     kube-system
	4a102da0b6e13       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             41 seconds ago       Running             controller                               0                   0b71193a9d24f       ingress-nginx-controller-6c8bf45fb-lzt9c                     ingress-nginx
	f3a3eb6514f75       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 47 seconds ago       Running             gcp-auth                                 0                   5734b0f3c3bc2       gcp-auth-78565c9fb4-tts8c                                    gcp-auth
	e90cb6f34fb09       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            50 seconds ago       Running             gadget                                   0                   269266278ffad       gadget-7hxzc                                                 gadget
	b5e54ce202660       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              54 seconds ago       Running             registry-proxy                           0                   388e1150d04d1       registry-proxy-fbtjr                                         kube-system
	305dbdeebf497       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             54 seconds ago       Exited              patch                                    2                   725870c33ba58       gcp-auth-certs-patch-9m29s                                   gcp-auth
	2aa4139d8dd55       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   57 seconds ago       Running             csi-external-health-monitor-controller   0                   e7a4bb276a7bc       csi-hostpathplugin-bcrc9                                     kube-system
	12a854b5199da       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   9bab4960d5518       snapshot-controller-7d9fbc56b8-7w2kr                         kube-system
	bf2b5a6db5940       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   e57a7eb1c5871       snapshot-controller-7d9fbc56b8-d9nz8                         kube-system
	075d53e5906ff       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   ee78fd2107c21       nvidia-device-plugin-daemonset-b6hwh                         kube-system
	fb3b6d33a4c26       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              patch                                    0                   54fb97f4d207a       ingress-nginx-admission-patch-4m8hk                          ingress-nginx
	c48d417c7a19b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   d87bcaf53f7d7       ingress-nginx-admission-create-8zwkg                         ingress-nginx
	7cc41ee7d29ef       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   cfafc98934096       yakd-dashboard-5ff678cb9-b4gnf                               yakd-dashboard
	b85dde7237c9e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   7cf2e7d72de8b       csi-hostpath-attacher-0                                      kube-system
	e99ded4840b6b       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   40eb08b0121b1       local-path-provisioner-648f6765c9-qfb28                      local-path-storage
	0f3e60922b612       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   d10ed202800a5       csi-hostpath-resizer-0                                       kube-system
	d8ad2af91929f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   966dce1df0a38       registry-6b586f9694-79xjl                                    kube-system
	44f41ef9e3625       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   34c34db92d77e       metrics-server-85b7d694d7-4pwlq                              kube-system
	393d306bdd197       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   877dddd6970de       cloud-spanner-emulator-6f9fcf858b-lxjc6                      default
	0c821a004b527       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   2fdf5b005182c       kube-ingress-dns-minikube                                    kube-system
	a25068fa2e690       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f31c0ff32faf4       coredns-66bc5c9577-msbpd                                     kube-system
	2a3c8692022a2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   09fd40706fba5       storage-provisioner                                          kube-system
	38ee32437965d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   33ad05db95f06       kube-proxy-7s8kz                                             kube-system
	19fe4bfa7943a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   f0c2c07776e94       kindnet-gpq7q                                                kube-system
	b546f11eac5f3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   3bd6199efe93d       etcd-addons-612806                                           kube-system
	2d41c4d4be99c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   e7c818d44ddd3       kube-apiserver-addons-612806                                 kube-system
	a834825c233e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   b61b38542c8f6       kube-scheduler-addons-612806                                 kube-system
	dc26ca1097619       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   3ae9b4c208cca       kube-controller-manager-addons-612806                        kube-system
	
	
	==> coredns [a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7] <==
	[INFO] 10.244.0.17:50602 - 11237 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002269758s
	[INFO] 10.244.0.17:50602 - 57848 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127086s
	[INFO] 10.244.0.17:50602 - 18849 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00015312s
	[INFO] 10.244.0.17:51138 - 37665 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144816s
	[INFO] 10.244.0.17:51138 - 37460 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000371271s
	[INFO] 10.244.0.17:46047 - 1362 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110913s
	[INFO] 10.244.0.17:46047 - 1140 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151536s
	[INFO] 10.244.0.17:47472 - 40876 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114861s
	[INFO] 10.244.0.17:47472 - 40662 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163974s
	[INFO] 10.244.0.17:42011 - 64665 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001270748s
	[INFO] 10.244.0.17:42011 - 64452 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001321808s
	[INFO] 10.244.0.17:56255 - 43742 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000137292s
	[INFO] 10.244.0.17:56255 - 43306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000236219s
	[INFO] 10.244.0.20:52067 - 21128 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189467s
	[INFO] 10.244.0.20:46385 - 55053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000134108s
	[INFO] 10.244.0.20:52552 - 48966 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130137s
	[INFO] 10.244.0.20:56220 - 20338 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078627s
	[INFO] 10.244.0.20:34252 - 52397 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100297s
	[INFO] 10.244.0.20:39201 - 59513 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095193s
	[INFO] 10.244.0.20:49873 - 50495 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002106358s
	[INFO] 10.244.0.20:53186 - 5847 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001551716s
	[INFO] 10.244.0.20:44440 - 11221 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004413564s
	[INFO] 10.244.0.20:59553 - 12990 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005310153s
	[INFO] 10.244.0.23:58542 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158846s
	[INFO] 10.244.0.23:42502 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164147s
	
	
	==> describe nodes <==
	Name:               addons-612806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-612806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=addons-612806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_33_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-612806
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-612806"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-612806
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:36:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:36:01 +0000   Sat, 15 Nov 2025 09:33:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:36:01 +0000   Sat, 15 Nov 2025 09:33:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:36:01 +0000   Sat, 15 Nov 2025 09:33:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:36:01 +0000   Sat, 15 Nov 2025 09:34:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-612806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c94e744c-8f53-4209-88d4-00cf31bc37c0
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-6f9fcf858b-lxjc6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  gadget                      gadget-7hxzc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  gcp-auth                    gcp-auth-78565c9fb4-tts8c                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-lzt9c    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m12s
	  kube-system                 coredns-66bc5c9577-msbpd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 csi-hostpathplugin-bcrc9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 etcd-addons-612806                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-gpq7q                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-addons-612806                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-addons-612806       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-7s8kz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-addons-612806                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 metrics-server-85b7d694d7-4pwlq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m13s
	  kube-system                 nvidia-device-plugin-daemonset-b6hwh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 registry-6b586f9694-79xjl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 registry-creds-764b6fb674-kpz66             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 registry-proxy-fbtjr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 snapshot-controller-7d9fbc56b8-7w2kr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-d9nz8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  local-path-storage          local-path-provisioner-648f6765c9-qfb28     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b4gnf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m15s  kube-proxy       
	  Normal   Starting                 2m24s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m24s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s  kubelet          Node addons-612806 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s  kubelet          Node addons-612806 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s  kubelet          Node addons-612806 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m19s  node-controller  Node addons-612806 event: Registered Node addons-612806 in Controller
	  Normal   NodeReady                95s    kubelet          Node addons-612806 status is now: NodeReady
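
As a quick cross-check of the Allocated resources figures above: the 1050m (52%) CPU request is simply the sum of the non-zero per-pod requests in the table, 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for the ingress-nginx controller, coredns, etcd, kindnet, kube-scheduler and metrics-server = 1050m, measured against the node's 2-CPU (2000m) allocatable, i.e. 1050/2000 ≈ 52%.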
	
	
	==> dmesg <==
	[Nov15 09:10] overlayfs: idmapped layers are currently not supported
	[Nov15 09:12] overlayfs: idmapped layers are currently not supported
	[Nov15 09:14] overlayfs: idmapped layers are currently not supported
	[ +52.677127] overlayfs: idmapped layers are currently not supported
	[Nov15 09:15] overlayfs: idmapped layers are currently not supported
	[ +18.264224] overlayfs: idmapped layers are currently not supported
	[Nov15 09:16] overlayfs: idmapped layers are currently not supported
	[Nov15 09:17] overlayfs: idmapped layers are currently not supported
	[Nov15 09:19] overlayfs: idmapped layers are currently not supported
	[ +25.565300] overlayfs: idmapped layers are currently not supported
	[Nov15 09:20] overlayfs: idmapped layers are currently not supported
	[Nov15 09:21] overlayfs: idmapped layers are currently not supported
	[Nov15 09:22] overlayfs: idmapped layers are currently not supported
	[ +46.757118] overlayfs: idmapped layers are currently not supported
	[Nov15 09:23] overlayfs: idmapped layers are currently not supported
	[ +24.765155] overlayfs: idmapped layers are currently not supported
	[Nov15 09:24] overlayfs: idmapped layers are currently not supported
	[Nov15 09:25] overlayfs: idmapped layers are currently not supported
	[Nov15 09:26] overlayfs: idmapped layers are currently not supported
	[Nov15 09:27] overlayfs: idmapped layers are currently not supported
	[ +25.160027] overlayfs: idmapped layers are currently not supported
	[Nov15 09:29] overlayfs: idmapped layers are currently not supported
	[ +40.626123] overlayfs: idmapped layers are currently not supported
	[Nov15 09:32] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314] <==
	{"level":"warn","ts":"2025-11-15T09:33:34.730100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.758221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.772693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.802485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.826098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.860883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.878414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.891155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.911978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.926609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.946107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.958328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:34.973001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.002643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.016406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.058804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.078231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.090854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:35.194642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:51.263101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:33:51.272743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.384652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.410136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.438122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:34:13.452697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f3a3eb6514f75e5a41f1d79a48e883d63397233ca2e4078d9d8eaffafca420f4] <==
	2025/11/15 09:35:14 GCP Auth Webhook started!
	2025/11/15 09:35:29 Ready to marshal response ...
	2025/11/15 09:35:29 Ready to write response ...
	2025/11/15 09:35:29 Ready to marshal response ...
	2025/11/15 09:35:29 Ready to write response ...
	2025/11/15 09:35:29 Ready to marshal response ...
	2025/11/15 09:35:29 Ready to write response ...
	2025/11/15 09:35:49 Ready to marshal response ...
	2025/11/15 09:35:49 Ready to write response ...
	2025/11/15 09:35:52 Ready to marshal response ...
	2025/11/15 09:35:52 Ready to write response ...
	2025/11/15 09:35:52 Ready to marshal response ...
	2025/11/15 09:35:52 Ready to write response ...
	2025/11/15 09:36:00 Ready to marshal response ...
	2025/11/15 09:36:00 Ready to write response ...
	
	
	==> kernel <==
	 09:36:02 up  4:18,  0 user,  load average: 1.94, 2.42, 2.78
	Linux addons-612806 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d] <==
	E1115 09:34:17.056178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 09:34:17.056253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 09:34:18.655858       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:34:18.655888       1 metrics.go:72] Registering metrics
	I1115 09:34:18.655955       1 controller.go:711] "Syncing nftables rules"
	I1115 09:34:27.061939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:27.062074       1 main.go:301] handling current node
	I1115 09:34:37.055802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:37.055963       1 main.go:301] handling current node
	I1115 09:34:47.055142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:47.055170       1 main.go:301] handling current node
	I1115 09:34:57.055733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:57.055766       1 main.go:301] handling current node
	I1115 09:35:07.060612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:07.060650       1 main.go:301] handling current node
	I1115 09:35:17.055306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:17.055342       1 main.go:301] handling current node
	I1115 09:35:27.055712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:27.055744       1 main.go:301] handling current node
	I1115 09:35:37.055790       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:37.055831       1 main.go:301] handling current node
	I1115 09:35:47.059232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:47.059263       1 main.go:301] handling current node
	I1115 09:35:57.054746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:57.054776       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658] <==
	E1115 09:34:27.674490       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.118.116:443: connect: connection refused" logger="UnhandledError"
	W1115 09:34:27.674933       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.118.116:443: connect: connection refused
	E1115 09:34:27.674967       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.118.116:443: connect: connection refused" logger="UnhandledError"
	W1115 09:34:27.749582       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.118.116:443: connect: connection refused
	E1115 09:34:27.749653       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.118.116:443: connect: connection refused" logger="UnhandledError"
	W1115 09:34:49.862979       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:34:49.863027       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1115 09:34:49.863039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1115 09:34:49.864170       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:34:49.864253       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1115 09:34:49.864266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1115 09:34:54.990264       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.32.193:443: connect: connection refused" logger="UnhandledError"
	W1115 09:34:54.990726       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:34:54.990803       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:34:54.991601       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.32.193:443: connect: connection refused" logger="UnhandledError"
	E1115 09:34:54.996828       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.32.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.32.193:443: connect: connection refused" logger="UnhandledError"
	I1115 09:34:55.125254       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1115 09:35:39.255794       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34566: use of closed network connection
	E1115 09:35:39.410950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34592: use of closed network connection
	
	
	==> kube-controller-manager [dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9] <==
	I1115 09:33:43.437339       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:33:43.437395       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-612806"
	I1115 09:33:43.437431       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 09:33:43.440573       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:33:43.440616       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:33:43.440697       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 09:33:43.441073       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:33:43.441565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:33:43.441702       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 09:33:43.442597       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 09:33:43.452142       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 09:33:43.455463       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 09:33:43.455477       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1115 09:33:49.042160       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 09:33:49.072438       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1115 09:34:13.377726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:34:13.377869       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 09:34:13.377930       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:34:13.426235       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 09:34:13.431378       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:34:13.478214       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:34:13.531792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:34:28.443243       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1115 09:34:43.486262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:34:43.544860       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507] <==
	I1115 09:33:46.879698       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:33:46.971285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:33:47.153809       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:33:47.153834       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:33:47.153910       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:33:47.333275       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:33:47.334479       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:33:47.340903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:33:47.341209       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:33:47.341223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:33:47.345189       1 config.go:200] "Starting service config controller"
	I1115 09:33:47.356460       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:33:47.346362       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:33:47.361398       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:33:47.347124       1 config.go:309] "Starting node config controller"
	I1115 09:33:47.361886       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:33:47.364097       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:33:47.346342       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:33:47.364118       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:33:47.364140       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:33:47.457993       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:33:47.462205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f] <==
	I1115 09:33:36.434961       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:33:36.438437       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:33:36.438547       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 09:33:36.461945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 09:33:36.470464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:33:36.470716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:33:36.470819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:33:36.470906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:33:36.471015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:33:36.471112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:33:36.471196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:33:36.471329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:33:36.471420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:33:36.471494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:33:36.471561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:33:36.471636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:33:36.471712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:33:36.471815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:33:36.471901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:33:36.471999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:33:36.472043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:33:36.472089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:33:37.301240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:33:37.414295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1115 09:33:37.938752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:35:59 addons-612806 kubelet[1265]: I1115 09:35:59.496757    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f" (UID: "2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:35:59 addons-612806 kubelet[1265]: I1115 09:35:59.496808    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953" (OuterVolumeSpecName: "data") pod "2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f" (UID: "2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f"). InnerVolumeSpecName "pvc-656ebd50-b53f-48f0-84f4-4943fda1a953". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:35:59 addons-612806 kubelet[1265]: I1115 09:35:59.503362    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f-kube-api-access-g9n27" (OuterVolumeSpecName: "kube-api-access-g9n27") pod "2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f" (UID: "2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f"). InnerVolumeSpecName "kube-api-access-g9n27". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 09:35:59 addons-612806 kubelet[1265]: I1115 09:35:59.597904    1265 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f-gcp-creds\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:35:59 addons-612806 kubelet[1265]: I1115 09:35:59.597950    1265 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9n27\" (UniqueName: \"kubernetes.io/projected/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f-kube-api-access-g9n27\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:35:59 addons-612806 kubelet[1265]: I1115 09:35:59.597966    1265 reconciler_common.go:299] "Volume detached for volume \"pvc-656ebd50-b53f-48f0-84f4-4943fda1a953\" (UniqueName: \"kubernetes.io/host-path/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:00 addons-612806 kubelet[1265]: I1115 09:36:00.404274    1265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7f853dea67b81c7c16e67e9dddbfcabc07caf659e26f789e5118749ce5831a"
	Nov 15 09:36:00 addons-612806 kubelet[1265]: I1115 09:36:00.585301    1265 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f" path="/var/lib/kubelet/pods/2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f/volumes"
	Nov 15 09:36:00 addons-612806 kubelet[1265]: I1115 09:36:00.822257    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-data\") pod \"helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") " pod="local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953"
	Nov 15 09:36:00 addons-612806 kubelet[1265]: I1115 09:36:00.822336    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4daf721c-0437-4b3c-b0d5-1ffc49b92564-script\") pod \"helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") " pod="local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953"
	Nov 15 09:36:00 addons-612806 kubelet[1265]: I1115 09:36:00.822391    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsklk\" (UniqueName: \"kubernetes.io/projected/4daf721c-0437-4b3c-b0d5-1ffc49b92564-kube-api-access-jsklk\") pod \"helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") " pod="local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953"
	Nov 15 09:36:00 addons-612806 kubelet[1265]: I1115 09:36:00.822455    1265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-gcp-creds\") pod \"helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") " pod="local-path-storage/helper-pod-delete-pvc-656ebd50-b53f-48f0-84f4-4943fda1a953"
	Nov 15 09:36:01 addons-612806 kubelet[1265]: W1115 09:36:01.125349    1265 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/438186eb0f36a036ea5c74b2b6bbabcf99f3c1e979698fe0c6a8a6ab6acd5430/crio-77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e WatchSource:0}: Error finding container 77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e: Status 404 returned error can't find the container with id 77f456658117942465e1055e6adeabc66fae9fa0c73530c461db5a8d0bd7167e
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.540358    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsklk\" (UniqueName: \"kubernetes.io/projected/4daf721c-0437-4b3c-b0d5-1ffc49b92564-kube-api-access-jsklk\") pod \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") "
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.540440    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-data\") pod \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") "
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.540462    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4daf721c-0437-4b3c-b0d5-1ffc49b92564-script\") pod \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") "
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.540508    1265 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-gcp-creds\") pod \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\" (UID: \"4daf721c-0437-4b3c-b0d5-1ffc49b92564\") "
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.541744    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4daf721c-0437-4b3c-b0d5-1ffc49b92564" (UID: "4daf721c-0437-4b3c-b0d5-1ffc49b92564"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.542099    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-data" (OuterVolumeSpecName: "data") pod "4daf721c-0437-4b3c-b0d5-1ffc49b92564" (UID: "4daf721c-0437-4b3c-b0d5-1ffc49b92564"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.542390    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4daf721c-0437-4b3c-b0d5-1ffc49b92564-script" (OuterVolumeSpecName: "script") pod "4daf721c-0437-4b3c-b0d5-1ffc49b92564" (UID: "4daf721c-0437-4b3c-b0d5-1ffc49b92564"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.559070    1265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4daf721c-0437-4b3c-b0d5-1ffc49b92564-kube-api-access-jsklk" (OuterVolumeSpecName: "kube-api-access-jsklk") pod "4daf721c-0437-4b3c-b0d5-1ffc49b92564" (UID: "4daf721c-0437-4b3c-b0d5-1ffc49b92564"). InnerVolumeSpecName "kube-api-access-jsklk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.641119    1265 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-gcp-creds\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.641157    1265 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jsklk\" (UniqueName: \"kubernetes.io/projected/4daf721c-0437-4b3c-b0d5-1ffc49b92564-kube-api-access-jsklk\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.641171    1265 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4daf721c-0437-4b3c-b0d5-1ffc49b92564-data\") on node \"addons-612806\" DevicePath \"\""
	Nov 15 09:36:02 addons-612806 kubelet[1265]: I1115 09:36:02.641193    1265 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4daf721c-0437-4b3c-b0d5-1ffc49b92564-script\") on node \"addons-612806\" DevicePath \"\""
	
	
	==> storage-provisioner [2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939] <==
	W1115 09:35:38.794786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:40.798189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:40.802967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:42.806161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:42.810564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:44.813371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:44.818160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:46.821383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:46.825859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:48.829842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:48.834378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:50.838257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:50.843506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:52.852142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:52.860404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:54.865818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:54.873578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:56.878962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:56.895367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:58.901058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:58.906496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:36:00.909793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:36:00.915142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:36:02.922916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:36:02.928441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-612806 -n addons-612806
helpers_test.go:269: (dbg) Run:  kubectl --context addons-612806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk registry-creds-764b6fb674-kpz66
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-612806 describe pod ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk registry-creds-764b6fb674-kpz66
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-612806 describe pod ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk registry-creds-764b6fb674-kpz66: exit status 1 (93.318733ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8zwkg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4m8hk" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-kpz66" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-612806 describe pod ingress-nginx-admission-create-8zwkg ingress-nginx-admission-patch-4m8hk registry-creds-764b6fb674-kpz66: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable headlamp --alsologtostderr -v=1: exit status 11 (290.698964ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:03.953584  524675 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:03.954519  524675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:03.954550  524675 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:03.954570  524675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:03.954849  524675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:03.955135  524675 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:03.955518  524675 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:03.955550  524675 addons.go:607] checking whether the cluster is paused
	I1115 09:36:03.955678  524675 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:03.955704  524675 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:03.956167  524675 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:03.988724  524675 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:03.988778  524675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:04.008222  524675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:04.112213  524675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:04.112306  524675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:04.141722  524675 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:04.141753  524675 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:04.141758  524675 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:04.141763  524675 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:04.141766  524675 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:04.141770  524675 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:04.141773  524675 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:04.141776  524675 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:04.141779  524675 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:04.141789  524675 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:04.141796  524675 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:04.141799  524675 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:04.141802  524675 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:04.141805  524675 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:04.141808  524675 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:04.141825  524675 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:04.141834  524675 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:04.141839  524675 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:04.141842  524675 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:04.141845  524675 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:04.141850  524675 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:04.141853  524675 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:04.141860  524675 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:04.141866  524675 cri.go:89] found id: ""
	I1115 09:36:04.141920  524675 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:04.157577  524675 out.go:203] 
	W1115 09:36:04.160712  524675 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:04.160745  524675 out.go:285] * 
	* 
	W1115 09:36:04.167496  524675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:04.170399  524675 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.68s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-lxjc6" [9aacef30-cfee-4287-abce-b6c541d3c7a1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005115203s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (558.267539ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:00.008894  524023 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:00.013053  524023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:00.013079  524023 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:00.013086  524023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:00.013425  524023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:00.013847  524023 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:00.014251  524023 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:00.014265  524023 addons.go:607] checking whether the cluster is paused
	I1115 09:36:00.014368  524023 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:00.014380  524023 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:00.014917  524023 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:00.092950  524023 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:00.093017  524023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:00.136340  524023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:00.371450  524023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:00.371553  524023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:00.447644  524023 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:00.447676  524023 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:00.447684  524023 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:00.447689  524023 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:00.447693  524023 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:00.447697  524023 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:00.447701  524023 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:00.447705  524023 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:00.447708  524023 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:00.447720  524023 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:00.447724  524023 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:00.447727  524023 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:00.447730  524023 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:00.447734  524023 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:00.447737  524023 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:00.447745  524023 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:00.447749  524023 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:00.447754  524023 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:00.447757  524023 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:00.447760  524023 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:00.447765  524023 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:00.447768  524023 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:00.447773  524023 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:00.447776  524023 cri.go:89] found id: ""
	I1115 09:36:00.447853  524023 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:00.470842  524023 out.go:203] 
	W1115 09:36:00.474787  524023 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:00.474818  524023 out.go:285] * 
	* 
	W1115 09:36:00.485676  524023 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:00.488725  524023 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
TestAddons/parallel/LocalPath (8.96s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-612806 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-612806 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-612806 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2a9d6a8f-8dfd-4c5e-9334-e036aa30cf2f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004288324s
addons_test.go:967: (dbg) Run:  kubectl --context addons-612806 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 ssh "cat /opt/local-path-provisioner/pvc-656ebd50-b53f-48f0-84f4-4943fda1a953_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-612806 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-612806 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (388.358918ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:36:00.864484  524136 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:36:00.865224  524136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:00.865237  524136 out.go:374] Setting ErrFile to fd 2...
	I1115 09:36:00.865243  524136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:36:00.865517  524136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:36:00.865871  524136 mustload.go:66] Loading cluster: addons-612806
	I1115 09:36:00.866252  524136 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:00.866265  524136 addons.go:607] checking whether the cluster is paused
	I1115 09:36:00.866367  524136 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:36:00.866377  524136 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:36:00.866810  524136 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:36:00.892631  524136 ssh_runner.go:195] Run: systemctl --version
	I1115 09:36:00.892690  524136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:36:00.945955  524136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:36:01.056547  524136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:36:01.056640  524136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:36:01.116856  524136 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:36:01.116884  524136 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:36:01.116890  524136 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:36:01.116894  524136 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:36:01.116897  524136 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:36:01.116901  524136 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:36:01.116906  524136 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:36:01.116909  524136 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:36:01.116912  524136 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:36:01.116923  524136 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:36:01.116930  524136 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:36:01.116933  524136 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:36:01.116937  524136 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:36:01.116940  524136 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:36:01.116943  524136 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:36:01.116951  524136 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:36:01.116960  524136 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:36:01.116966  524136 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:36:01.116969  524136 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:36:01.116972  524136 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:36:01.116977  524136 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:36:01.116980  524136 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:36:01.116983  524136 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:36:01.116986  524136 cri.go:89] found id: ""
	I1115 09:36:01.117043  524136 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:36:01.149124  524136 out.go:203] 
	W1115 09:36:01.152734  524136 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:36:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:36:01.152763  524136 out.go:285] * 
	* 
	W1115 09:36:01.165565  524136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:36:01.169582  524136 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.96s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-b6hwh" [80e44b45-0912-4867-a446-4542a1ec2a13] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003350991s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (263.530465ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:35:51.996206  523637 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:35:51.997021  523637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:51.997057  523637 out.go:374] Setting ErrFile to fd 2...
	I1115 09:35:51.997079  523637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:51.997346  523637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:35:51.997684  523637 mustload.go:66] Loading cluster: addons-612806
	I1115 09:35:51.998062  523637 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:51.998103  523637 addons.go:607] checking whether the cluster is paused
	I1115 09:35:51.998230  523637 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:51.998263  523637 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:35:51.998713  523637 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:35:52.017875  523637 ssh_runner.go:195] Run: systemctl --version
	I1115 09:35:52.017932  523637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:35:52.036566  523637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:35:52.145639  523637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:35:52.145730  523637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:35:52.180585  523637 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:35:52.180614  523637 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:35:52.180620  523637 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:35:52.180625  523637 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:35:52.180629  523637 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:35:52.180633  523637 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:35:52.180636  523637 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:35:52.180639  523637 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:35:52.180643  523637 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:35:52.180654  523637 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:35:52.180664  523637 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:35:52.180668  523637 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:35:52.180672  523637 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:35:52.180675  523637 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:35:52.180678  523637 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:35:52.180685  523637 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:35:52.180691  523637 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:35:52.180696  523637 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:35:52.180699  523637 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:35:52.180702  523637 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:35:52.180707  523637 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:35:52.180722  523637 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:35:52.180725  523637 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:35:52.180728  523637 cri.go:89] found id: ""
	I1115 09:35:52.180781  523637 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:35:52.196828  523637 out.go:203] 
	W1115 09:35:52.200279  523637 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:35:52.200306  523637 out.go:285] * 
	* 
	W1115 09:35:52.207053  523637 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:35:52.211194  523637 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)
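The addons-disable failures in this group all have the same shape: before disabling an addon, minikube checks whether the cluster is paused by listing runc containers on the node, and `sudo runc list -f json` exits with status 1 because /run/runc does not exist on this CRI-O node, so each `addons disable` invocation aborts with MK_ADDON_DISABLE_PAUSED. A minimal way to reproduce the check by hand against this profile (a sketch only; it assumes the addons-612806 cluster is still running):

	# the exact command the paused-state check runs on the node; fails as in the log above
	minikube -p addons-612806 ssh -- sudo runc list -f json
	# the kube-system containers are still visible through the CRI, which is the other
	# half of the same check in the log (cri.go)
	minikube -p addons-612806 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system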

                                                
                                    
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-b4gnf" [ad7600b3-1b58-40b8-b011-f9753b5a9fbd] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004758407s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-612806 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-612806 addons disable yakd --alsologtostderr -v=1: exit status 11 (263.613783ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:35:45.733466  523540 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:35:45.734301  523540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:45.734324  523540 out.go:374] Setting ErrFile to fd 2...
	I1115 09:35:45.734330  523540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:35:45.734616  523540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:35:45.734927  523540 mustload.go:66] Loading cluster: addons-612806
	I1115 09:35:45.735301  523540 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:45.735319  523540 addons.go:607] checking whether the cluster is paused
	I1115 09:35:45.735422  523540 config.go:182] Loaded profile config "addons-612806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:35:45.735438  523540 host.go:66] Checking if "addons-612806" exists ...
	I1115 09:35:45.735875  523540 cli_runner.go:164] Run: docker container inspect addons-612806 --format={{.State.Status}}
	I1115 09:35:45.758833  523540 ssh_runner.go:195] Run: systemctl --version
	I1115 09:35:45.758905  523540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-612806
	I1115 09:35:45.779813  523540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/addons-612806/id_rsa Username:docker}
	I1115 09:35:45.884413  523540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:35:45.884497  523540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:35:45.915655  523540 cri.go:89] found id: "f5d0536bcdade987d1ee40efcb0425454c5b852a370da0a1b8ce45600155c73c"
	I1115 09:35:45.915679  523540 cri.go:89] found id: "2760bb90e56dac70887af43b5c6bf3084e39ea1892c5a32e31ce4bce26608561"
	I1115 09:35:45.915684  523540 cri.go:89] found id: "33e20be214f16ef7aabe9b2524d436973e0bf09543489372279825561b86f082"
	I1115 09:35:45.915688  523540 cri.go:89] found id: "746d8de6dedd7449b28e706fa0bf55c591faded80f2a702475a3efa9ed37b554"
	I1115 09:35:45.915692  523540 cri.go:89] found id: "7bea5772f4a3681d83360f0d1daacb246e670c7eb29c0f48d9cd981e6f18247f"
	I1115 09:35:45.915695  523540 cri.go:89] found id: "b5e54ce202660ea9037fd058996c030f3671f05be752bd7900b7c5dc51169b41"
	I1115 09:35:45.915698  523540 cri.go:89] found id: "2aa4139d8dd55b1ac6839a10734c5298c70380386869ae70d3945f9d39c5bfb0"
	I1115 09:35:45.915701  523540 cri.go:89] found id: "12a854b5199da71a9939c36272d9b82a75ffa531ecac42b769b095c6f6db7441"
	I1115 09:35:45.915704  523540 cri.go:89] found id: "bf2b5a6db5940a4ca894db6b7d20804400650877e88f68f568cb2d751d3ba723"
	I1115 09:35:45.915713  523540 cri.go:89] found id: "075d53e5906ffe7f561ce89a67c10b62d0e467c0e032977674559a69a20ef70c"
	I1115 09:35:45.915717  523540 cri.go:89] found id: "b85dde7237c9eb2602222bf508dffc29dfc012b778171cff21731861f63149d3"
	I1115 09:35:45.915721  523540 cri.go:89] found id: "0f3e60922b61256faf2e61c97bd657a8cfc311fddc0384e9f6f21c5cfae67816"
	I1115 09:35:45.915724  523540 cri.go:89] found id: "d8ad2af91929f3cfc685bc6ddfd8542ac9c4f0b0bc662c877f8badcab1cc3a67"
	I1115 09:35:45.915787  523540 cri.go:89] found id: "44f41ef9e3625bae6b2198d3fa1862495f65281a5f30733bff6b379c09a44c93"
	I1115 09:35:45.915791  523540 cri.go:89] found id: "0c821a004b52727b4a9ad00be31ba1f5a1a83b4ed635d74f02946498d86d4376"
	I1115 09:35:45.915796  523540 cri.go:89] found id: "a25068fa2e690fddf51fdfaa46b59b2ed4402b63dfa482493312f46a503a00e7"
	I1115 09:35:45.915799  523540 cri.go:89] found id: "2a3c8692022a21961dd83af8c946a29c7ee81d410af602697945c2596925b939"
	I1115 09:35:45.915803  523540 cri.go:89] found id: "38ee32437965dca2aaf56bc1432b2cf127cd2eb22c2dd1038fc120bb13f57507"
	I1115 09:35:45.915806  523540 cri.go:89] found id: "19fe4bfa7943ac31fd1de61cafbe5dd68e0036f7394c6b1b98c252a1fcbe1d7d"
	I1115 09:35:45.915809  523540 cri.go:89] found id: "b546f11eac5f32df9fc8bfb0829305ff41b00e9d2279470e3ef564470b86d314"
	I1115 09:35:45.915814  523540 cri.go:89] found id: "2d41c4d4be99c005acd9ff1da84f0675a862df6fd80922c0fc023b1b5dc2a658"
	I1115 09:35:45.915817  523540 cri.go:89] found id: "a834825c233e4171ba8cd2d8a57fde3f97002d173fe60ddc0a80e2a3d4bb689f"
	I1115 09:35:45.915820  523540 cri.go:89] found id: "dc26ca1097619a7e0e283d30e2c4f15a2a602cf8eb15fd90c63dbde77dd23ae9"
	I1115 09:35:45.915823  523540 cri.go:89] found id: ""
	I1115 09:35:45.915873  523540 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:35:45.930758  523540 out.go:203] 
	W1115 09:35:45.933795  523540 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:35:45.933825  523540 out.go:285] * 
	* 
	W1115 09:35:45.940500  523540 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:35:45.943296  523540 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-612806 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-755106 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-755106 expose deployment hello-node-connect --type=NodePort --port=8080
I1115 09:42:59.109343  516637 retry.go:31] will retry after 2.56424196s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c53f315c-6b88-4472-b165-dedcd2c543a7 ResourceVersion:691 Generation:0 CreationTimestamp:2025-11-15 09:42:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x4001550c20 VolumeMode:0x4001550c30 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vnnmd" [ee34115b-cb43-4169-b8e8-9ced901de6c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-755106 -n functional-755106
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-15 09:52:59.519026709 +0000 UTC m=+1223.761480680
functional_test.go:1645: (dbg) Run:  kubectl --context functional-755106 describe po hello-node-connect-7d85dfc575-vnnmd -n default
functional_test.go:1645: (dbg) kubectl --context functional-755106 describe po hello-node-connect-7d85dfc575-vnnmd -n default:
Name:             hello-node-connect-7d85dfc575-vnnmd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-755106/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:42:59 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2ppfb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2ppfb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vnnmd to functional-755106
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-755106 logs hello-node-connect-7d85dfc575-vnnmd -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-755106 logs hello-node-connect-7d85dfc575-vnnmd -n default: exit status 1 (100.73551ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vnnmd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-755106 logs hello-node-connect-7d85dfc575-vnnmd -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
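The failure mode is visible in the kubelet events above: with CRI-O's short-name mode set to enforcing, the unqualified image name kicbase/echo-server resolves to an ambiguous list, every pull fails, the pod stays in ImagePullBackOff, the NodePort service never gets endpoints, and the 10m wait times out. A sketch of the two usual workarounds, assuming the intended image is the Docker Hub copy (the test only passes the short name, so the registry is an assumption):

	# fully qualify the image so no short-name resolution is needed
	kubectl --context functional-755106 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:latest
	# or relax short-name handling on the node via containers-registries.conf(5),
	# e.g. short-name-mode = "permissive" instead of "enforcing"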
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-755106 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-vnnmd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-755106/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:42:59 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2ppfb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2ppfb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vnnmd to functional-755106
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-755106 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-755106 logs -l app=hello-node-connect: exit status 1 (83.11293ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vnnmd" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-755106 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-755106 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.108.188
IPs:                      10.102.108.188
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30708/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-755106
helpers_test.go:243: (dbg) docker inspect functional-755106:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753",
	        "Created": "2025-11-15T09:39:57.794107087Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 532334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:39:57.873123008Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753/hostname",
	        "HostsPath": "/var/lib/docker/containers/df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753/hosts",
	        "LogPath": "/var/lib/docker/containers/df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753/df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753-json.log",
	        "Name": "/functional-755106",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-755106:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-755106",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df7608da83320502fb84a5660b8953d133508c50d302b494fc8754c8b6d7c753",
	                "LowerDir": "/var/lib/docker/overlay2/28bfe048c68b7276ecf34f62a669bbbcb6e7442d59c5c4d673923166fb32cd20-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/28bfe048c68b7276ecf34f62a669bbbcb6e7442d59c5c4d673923166fb32cd20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/28bfe048c68b7276ecf34f62a669bbbcb6e7442d59c5c4d673923166fb32cd20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/28bfe048c68b7276ecf34f62a669bbbcb6e7442d59c5c4d673923166fb32cd20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-755106",
	                "Source": "/var/lib/docker/volumes/functional-755106/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-755106",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-755106",
	                "name.minikube.sigs.k8s.io": "functional-755106",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aac27219bb4d590d77e3fa3bcf04590edf1335a565c3b388de8f7b4bb79187de",
	            "SandboxKey": "/var/run/docker/netns/aac27219bb4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-755106": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:61:b1:42:fd:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d3545ec744e87ab6427142cfe2793ed87b34f31c1c72efdd79c8f4be17e205d",
	                    "EndpointID": "f0fa3c255984b351a49296b5e5dde7955efb9909d87a0322e51665546f1c301f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-755106",
	                        "df7608da8332"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-755106 -n functional-755106
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 logs -n 25: (1.466153123s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-755106 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ kubectl │ functional-755106 kubectl -- --context functional-755106 get pods                                                          │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:41 UTC │
	│ start   │ -p functional-755106 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:41 UTC │ 15 Nov 25 09:42 UTC │
	│ service │ invalid-svc -p functional-755106                                                                                           │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │                     │
	│ cp      │ functional-755106 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ config  │ functional-755106 config unset cpus                                                                                        │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ config  │ functional-755106 config get cpus                                                                                          │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │                     │
	│ config  │ functional-755106 config set cpus 2                                                                                        │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ config  │ functional-755106 config get cpus                                                                                          │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ ssh     │ functional-755106 ssh -n functional-755106 sudo cat /home/docker/cp-test.txt                                               │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ config  │ functional-755106 config unset cpus                                                                                        │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ config  │ functional-755106 config get cpus                                                                                          │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │                     │
	│ ssh     │ functional-755106 ssh echo hello                                                                                           │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ cp      │ functional-755106 cp functional-755106:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2174683691/001/cp-test.txt │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ ssh     │ functional-755106 ssh cat /etc/hostname                                                                                    │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ ssh     │ functional-755106 ssh -n functional-755106 sudo cat /home/docker/cp-test.txt                                               │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ tunnel  │ functional-755106 tunnel --alsologtostderr                                                                                 │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │                     │
	│ tunnel  │ functional-755106 tunnel --alsologtostderr                                                                                 │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │                     │
	│ cp      │ functional-755106 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ tunnel  │ functional-755106 tunnel --alsologtostderr                                                                                 │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │                     │
	│ ssh     │ functional-755106 ssh -n functional-755106 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ addons  │ functional-755106 addons list                                                                                              │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	│ addons  │ functional-755106 addons list -o json                                                                                      │ functional-755106 │ jenkins │ v1.37.0 │ 15 Nov 25 09:42 UTC │ 15 Nov 25 09:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:41:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:41:51.330477  536565 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:41:51.330581  536565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:41:51.330632  536565 out.go:374] Setting ErrFile to fd 2...
	I1115 09:41:51.330637  536565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:41:51.330896  536565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:41:51.331282  536565 out.go:368] Setting JSON to false
	I1115 09:41:51.332304  536565 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15863,"bootTime":1763183849,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:41:51.332372  536565 start.go:143] virtualization:  
	I1115 09:41:51.335968  536565 out.go:179] * [functional-755106] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 09:41:51.339765  536565 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:41:51.339856  536565 notify.go:221] Checking for updates...
	I1115 09:41:51.345674  536565 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:41:51.348637  536565 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:41:51.351527  536565 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:41:51.354491  536565 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 09:41:51.357479  536565 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:41:51.360857  536565 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:51.360968  536565 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:41:51.396148  536565 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:41:51.396248  536565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:41:51.463756  536565 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-15 09:41:51.453049288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:41:51.463853  536565 docker.go:319] overlay module found
	I1115 09:41:51.466851  536565 out.go:179] * Using the docker driver based on existing profile
	I1115 09:41:51.469844  536565 start.go:309] selected driver: docker
	I1115 09:41:51.469854  536565 start.go:930] validating driver "docker" against &{Name:functional-755106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:41:51.469943  536565 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:41:51.470052  536565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:41:51.533825  536565 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-15 09:41:51.524108113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:41:51.534226  536565 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:41:51.534250  536565 cni.go:84] Creating CNI manager for ""
	I1115 09:41:51.534305  536565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:41:51.534343  536565 start.go:353] cluster config:
	{Name:functional-755106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
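The two cni.go lines above record the CNI decision for this profile: the docker driver paired with the crio runtime gets kindnet. A minimal sketch of that mapping, using a hypothetical recommendCNI helper (the real selection in minikube's cni package is driven by the full cluster config, not just these two strings):

    // Sketch only: map a driver/runtime pair to a recommended CNI,
    // mirroring the "docker + crio -> kindnet" decision reported above.
    package main

    import "fmt"

    func recommendCNI(driver, runtime string) string {
        // KIC drivers (docker/podman) with a non-docker runtime need an
        // explicit CNI; kindnet is the lightweight default in this sketch.
        if (driver == "docker" || driver == "podman") &&
            (runtime == "crio" || runtime == "containerd") {
            return "kindnet"
        }
        return "" // empty: let the runtime's built-in networking handle it
    }

    func main() {
        fmt.Println(recommendCNI("docker", "crio")) // kindnet
    }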
	I1115 09:41:51.537454  536565 out.go:179] * Starting "functional-755106" primary control-plane node in "functional-755106" cluster
	I1115 09:41:51.540240  536565 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:41:51.543104  536565 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:41:51.546170  536565 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:41:51.546306  536565 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:41:51.546327  536565 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 09:41:51.546333  536565 cache.go:65] Caching tarball of preloaded images
	I1115 09:41:51.546407  536565 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 09:41:51.546416  536565 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:41:51.546519  536565 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/config.json ...
	I1115 09:41:51.564183  536565 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:41:51.564193  536565 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:41:51.564203  536565 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:41:51.564223  536565 start.go:360] acquireMachinesLock for functional-755106: {Name:mkb4e286d4fd2dd58a3ca3bfdb0cd11feaa936ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:41:51.564276  536565 start.go:364] duration metric: took 36.758µs to acquireMachinesLock for "functional-755106"
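The acquireMachinesLock lines above show a named lock taken with a 500ms retry delay and a 10m timeout before the existing machine is reused. A rough, in-process sketch of that acquire-with-retry pattern (illustrative only; minikube's machines lock is cross-process, not a sync.Mutex):

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    var (
        mu    sync.Mutex
        locks = map[string]*sync.Mutex{}
    )

    // acquire polls the named lock every `delay` until it is free or
    // `timeout` expires.
    func acquire(name string, delay, timeout time.Duration) (*sync.Mutex, error) {
        mu.Lock()
        l, ok := locks[name]
        if !ok {
            l = &sync.Mutex{}
            locks[name] = l
        }
        mu.Unlock()

        deadline := time.Now().Add(timeout)
        for {
            if l.TryLock() {
                return l, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring lock " + name)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        l, err := acquire("functional-755106", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer l.Unlock()
        fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }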
	I1115 09:41:51.564295  536565 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:41:51.564299  536565 fix.go:54] fixHost starting: 
	I1115 09:41:51.564563  536565 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
	I1115 09:41:51.588007  536565 fix.go:112] recreateIfNeeded on functional-755106: state=Running err=<nil>
	W1115 09:41:51.588034  536565 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:41:51.591228  536565 out.go:252] * Updating the running docker "functional-755106" container ...
	I1115 09:41:51.591251  536565 machine.go:94] provisionDockerMachine start ...
	I1115 09:41:51.591338  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:51.608391  536565 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:51.608714  536565 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1115 09:41:51.608721  536565 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:41:51.760944  536565 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-755106
	
	I1115 09:41:51.760957  536565 ubuntu.go:182] provisioning hostname "functional-755106"
	I1115 09:41:51.761018  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:51.780021  536565 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:51.780317  536565 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1115 09:41:51.780326  536565 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-755106 && echo "functional-755106" | sudo tee /etc/hostname
	I1115 09:41:51.942806  536565 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-755106
	
	I1115 09:41:51.942878  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:51.967989  536565 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:51.968293  536565 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1115 09:41:51.968308  536565 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-755106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-755106/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-755106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:41:52.126099  536565 main.go:143] libmachine: SSH cmd err, output: <nil>: 
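The shell block above rewrites (or appends) the 127.0.1.1 entry so the container resolves its own hostname. A small sketch of how such a script can be generated for an arbitrary hostname; the template mirrors the logged command, but the helper name is invented here:

    package main

    import "fmt"

    // hostsPatchScript renders the /etc/hosts patch shown in the log for a
    // given hostname.
    func hostsPatchScript(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(hostsPatchScript("functional-755106"))
    }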
	I1115 09:41:52.126117  536565 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 09:41:52.126142  536565 ubuntu.go:190] setting up certificates
	I1115 09:41:52.126151  536565 provision.go:84] configureAuth start
	I1115 09:41:52.126212  536565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-755106
	I1115 09:41:52.143415  536565 provision.go:143] copyHostCerts
	I1115 09:41:52.143485  536565 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 09:41:52.143502  536565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 09:41:52.143575  536565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 09:41:52.143666  536565 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 09:41:52.143670  536565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 09:41:52.143693  536565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 09:41:52.143741  536565 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 09:41:52.143744  536565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 09:41:52.143767  536565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 09:41:52.143811  536565 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.functional-755106 san=[127.0.0.1 192.168.49.2 functional-755106 localhost minikube]
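The provision.go line above generates a server certificate signed by the minikube CA, with SANs for 127.0.0.1, 192.168.49.2, functional-755106, localhost and minikube. A self-contained sketch of the same idea using crypto/x509; it creates a throwaway CA instead of loading the persisted ca.pem/ca-key.pem, so it is an illustration rather than minikube's code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Throwaway CA standing in for the persisted ca.pem / ca-key.pem pair.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SAN list from the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-755106"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:     []string{"functional-755106", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }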
	I1115 09:41:52.424107  536565 provision.go:177] copyRemoteCerts
	I1115 09:41:52.424159  536565 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:41:52.424207  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:52.441567  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:41:52.545489  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:41:52.562754  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 09:41:52.580675  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:41:52.597825  536565 provision.go:87] duration metric: took 471.653244ms to configureAuth
	I1115 09:41:52.597842  536565 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:41:52.598034  536565 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:41:52.598141  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:52.615719  536565 main.go:143] libmachine: Using SSH client type: native
	I1115 09:41:52.616042  536565 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1115 09:41:52.616055  536565 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:41:57.997272  536565 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:41:57.997286  536565 machine.go:97] duration metric: took 6.406027893s to provisionDockerMachine
	I1115 09:41:57.997295  536565 start.go:293] postStartSetup for "functional-755106" (driver="docker")
	I1115 09:41:57.997305  536565 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:41:57.997364  536565 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:41:57.997401  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:58.017856  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:41:58.125620  536565 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:41:58.128934  536565 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:41:58.128953  536565 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:41:58.128962  536565 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 09:41:58.129016  536565 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 09:41:58.129087  536565 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 09:41:58.129179  536565 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/test/nested/copy/516637/hosts -> hosts in /etc/test/nested/copy/516637
	I1115 09:41:58.129222  536565 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/516637
	I1115 09:41:58.136648  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 09:41:58.154188  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/test/nested/copy/516637/hosts --> /etc/test/nested/copy/516637/hosts (40 bytes)
	I1115 09:41:58.171861  536565 start.go:296] duration metric: took 174.551085ms for postStartSetup
	I1115 09:41:58.171939  536565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:41:58.172005  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:58.189485  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:41:58.290803  536565 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:41:58.295421  536565 fix.go:56] duration metric: took 6.731113368s for fixHost
	I1115 09:41:58.295437  536565 start.go:83] releasing machines lock for "functional-755106", held for 6.731154295s
	I1115 09:41:58.295529  536565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-755106
	I1115 09:41:58.312889  536565 ssh_runner.go:195] Run: cat /version.json
	I1115 09:41:58.312911  536565 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:41:58.312932  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:58.312991  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:41:58.333981  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:41:58.346145  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:41:58.524845  536565 ssh_runner.go:195] Run: systemctl --version
	I1115 09:41:58.531363  536565 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:41:58.567692  536565 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:41:58.572236  536565 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:41:58.572306  536565 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:41:58.580014  536565 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:41:58.580028  536565 start.go:496] detecting cgroup driver to use...
	I1115 09:41:58.580059  536565 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 09:41:58.580106  536565 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:41:58.595693  536565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:41:58.608884  536565 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:41:58.608937  536565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:41:58.624427  536565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:41:58.637546  536565 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:41:58.773918  536565 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:41:58.907471  536565 docker.go:234] disabling docker service ...
	I1115 09:41:58.907557  536565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:41:58.922683  536565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:41:58.935639  536565 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:41:59.073378  536565 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:41:59.215058  536565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:41:59.228557  536565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:41:59.242241  536565 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:41:59.242295  536565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.251495  536565 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 09:41:59.251570  536565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.260242  536565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.268944  536565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.277839  536565 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:41:59.285902  536565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.294730  536565 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.303511  536565 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:41:59.312039  536565 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:41:59.319544  536565 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:41:59.327386  536565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:41:59.464174  536565 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:42:05.888140  536565 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.423942512s)
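The sed invocations above point CRI-O at the pause image and the cgroupfs cgroup manager before the ~6.4s crio restart. The same two rewrites expressed in Go, as a sketch that edits /etc/crio/crio.conf.d/02-crio.conf in place (run on the node itself; no ssh_runner indirection and minimal error handling):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Replace the whole pause_image line, as the sed command does.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // Force the cgroupfs cgroup manager.
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }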
	I1115 09:42:05.888156  536565 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:42:05.888207  536565 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:42:05.891973  536565 start.go:564] Will wait 60s for crictl version
	I1115 09:42:05.892030  536565 ssh_runner.go:195] Run: which crictl
	I1115 09:42:05.895491  536565 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:42:05.923306  536565 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:42:05.923381  536565 ssh_runner.go:195] Run: crio --version
	I1115 09:42:05.954089  536565 ssh_runner.go:195] Run: crio --version
	I1115 09:42:05.986613  536565 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:42:05.989589  536565 cli_runner.go:164] Run: docker network inspect functional-755106 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:42:06.009638  536565 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:42:06.017234  536565 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1115 09:42:06.020238  536565 kubeadm.go:884] updating cluster {Name:functional-755106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:42:06.020389  536565 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:42:06.020459  536565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:42:06.059794  536565 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:42:06.059806  536565 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:42:06.059867  536565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:42:06.085347  536565 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:42:06.085359  536565 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:42:06.085366  536565 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1115 09:42:06.085467  536565 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-755106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
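The systemd drop-in above pins the kubelet ExecStart line to the node's hostname, IP and Kubernetes version. A sketch of assembling that line from a few node parameters; the node struct and helper are ad hoc here, not minikube's actual flag builder:

    package main

    import (
        "fmt"
        "strings"
    )

    type node struct {
        KubernetesVersion, Hostname, NodeIP string
    }

    // kubeletExecStart builds the ExecStart line shown in the drop-in above.
    func kubeletExecStart(n node) string {
        args := []string{
            fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", n.KubernetesVersion),
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--cgroups-per-qos=false",
            "--config=/var/lib/kubelet/config.yaml",
            "--enforce-node-allocatable=",
            "--hostname-override=" + n.Hostname,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + n.NodeIP,
        }
        return "ExecStart=" + strings.Join(args, " ")
    }

    func main() {
        fmt.Println(kubeletExecStart(node{"v1.34.1", "functional-755106", "192.168.49.2"}))
    }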
	I1115 09:42:06.085549  536565 ssh_runner.go:195] Run: crio config
	I1115 09:42:06.157312  536565 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1115 09:42:06.157334  536565 cni.go:84] Creating CNI manager for ""
	I1115 09:42:06.157342  536565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:42:06.157360  536565 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:42:06.157384  536565 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-755106 NodeName:functional-755106 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:42:06.157518  536565 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-755106"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:42:06.157646  536565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:42:06.165430  536565 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:42:06.165491  536565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:42:06.173695  536565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 09:42:06.186619  536565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:42:06.199583  536565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
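The 2064-byte kubeadm.yaml.new written above is rendered from the cluster config dumped earlier. A trimmed text/template sketch of how the InitConfiguration stanza could be produced from those values (this template covers only a few fields and is not minikube's full bootstrapper template):

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.AdvertiseAddress}}"
      taints: []
    `

    func main() {
        t := template.Must(template.New("init").Parse(initCfg))
        if err := t.Execute(os.Stdout, map[string]any{
            "AdvertiseAddress": "192.168.49.2",
            "APIServerPort":    8441,
            "CRISocket":        "/var/run/crio/crio.sock",
            "NodeName":         "functional-755106",
        }); err != nil {
            panic(err)
        }
    }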
	I1115 09:42:06.212421  536565 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:42:06.216113  536565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:42:06.353387  536565 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:42:06.366980  536565 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106 for IP: 192.168.49.2
	I1115 09:42:06.366991  536565 certs.go:195] generating shared ca certs ...
	I1115 09:42:06.367005  536565 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:42:06.367165  536565 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 09:42:06.367215  536565 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 09:42:06.367230  536565 certs.go:257] generating profile certs ...
	I1115 09:42:06.367328  536565 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.key
	I1115 09:42:06.367387  536565 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/apiserver.key.b3817f93
	I1115 09:42:06.367431  536565 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/proxy-client.key
	I1115 09:42:06.367547  536565 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 09:42:06.367609  536565 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 09:42:06.367619  536565 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 09:42:06.367651  536565 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:42:06.367681  536565 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:42:06.367710  536565 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 09:42:06.367763  536565 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 09:42:06.368482  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:42:06.388169  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:42:06.405663  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:42:06.424529  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:42:06.441504  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 09:42:06.458641  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:42:06.475997  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:42:06.492887  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:42:06.510546  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 09:42:06.527830  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 09:42:06.545016  536565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:42:06.561857  536565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:42:06.574785  536565 ssh_runner.go:195] Run: openssl version
	I1115 09:42:06.580808  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 09:42:06.588858  536565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 09:42:06.592390  536565 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 09:42:06.592444  536565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 09:42:06.633076  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 09:42:06.640861  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 09:42:06.648726  536565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 09:42:06.652345  536565 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 09:42:06.652396  536565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 09:42:06.697942  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:42:06.705750  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:42:06.713691  536565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:42:06.717334  536565 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:42:06.717386  536565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:42:06.757661  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
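Each CA certificate copied above is followed by an openssl x509 -hash call and a <hash>.0 symlink under /etc/ssl/certs, which is how OpenSSL-based clients locate trusted CAs. A sketch of that hash-and-link step for the minikubeCA cert, shelling out to openssl just as the log does (run as root on the node):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // Ask openssl for the certificate's subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); err == nil {
            return // symlink already present, matching the "test -L || ln -fs" guard
        }
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }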
	I1115 09:42:06.765498  536565 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:42:06.769224  536565 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:42:06.815111  536565 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:42:06.855863  536565 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:42:06.906121  536565 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:42:07.012535  536565 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:42:07.104526  536565 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
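The -checkend 86400 probes above ask whether each control-plane certificate is still valid 24 hours from now, which decides whether certificates get regenerated. An equivalent check written with crypto/x509, taking the certificate path as a command-line argument (illustrative only):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Usage: checkend <path-to-pem-cert>
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate will not expire within 86400s")
    }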
	I1115 09:42:07.189986  536565 kubeadm.go:401] StartCluster: {Name:functional-755106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:42:07.190097  536565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:42:07.190165  536565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:42:07.300056  536565 cri.go:89] found id: "4bcc985f378bdd828e885444d8f8b32182050b573f8aefa8ee382214f7900a97"
	I1115 09:42:07.300078  536565 cri.go:89] found id: "640aa85efe01d3aaf4ba16d01361dc78e7516b339f131e045796e5e8b7c467ae"
	I1115 09:42:07.300081  536565 cri.go:89] found id: "427a188f3f3777f9b3b0ebc029aa9864031a10ae537a4436d6d5e6bfeef37fff"
	I1115 09:42:07.300084  536565 cri.go:89] found id: "fcb93ee415b926f78dc112d765468ee9a97e6b83b1a3e80417f1958aa5f08b7b"
	I1115 09:42:07.300086  536565 cri.go:89] found id: "b10a7d37dd8448bccbb1e1df65c8a1a2a1dbcea628747c1544275fb861b96661"
	I1115 09:42:07.300088  536565 cri.go:89] found id: "429ebff84bcedc1afffd2e575f2d90dcb5974df8a0b4330d3f829e4011e8c36f"
	I1115 09:42:07.300090  536565 cri.go:89] found id: "ac27bdc97ed80cf1e1639dfddb2f718b64fcbb712a5612fe729aab3fd6e75015"
	I1115 09:42:07.300092  536565 cri.go:89] found id: "e578bb6c5dc7e15411aaa1845c638dbec6513e60aacc71ce9c9de4780e3aa0a9"
	I1115 09:42:07.300094  536565 cri.go:89] found id: "dc85a7482c9545e43a392a8f3350523d5ea34c5c122c5a48a90ebc6c80cb66f0"
	I1115 09:42:07.300100  536565 cri.go:89] found id: "55f8e1f093714bc5cac28db8db52d2995c6e013516f7507f49f2c078ab94fc3e"
	I1115 09:42:07.300102  536565 cri.go:89] found id: "cfcd700592b5f0751d3467f36ec79e92a1f0f8dbcf7380e3cae00f83a4d05f5e"
	I1115 09:42:07.300104  536565 cri.go:89] found id: "a5c87a6e7fe9eef4379bba2a7e001ad179782fa823867dc7035f00e3f1ea50d8"
	I1115 09:42:07.300106  536565 cri.go:89] found id: "14e25ebbb887c8887215051aaa5fa590ba390042cf207fc03ae6674e9add9ea1"
	I1115 09:42:07.300109  536565 cri.go:89] found id: "8195793bcf63974b4aee3b06b19d0ca41cdde8714a4afd2a13b6fe503283d83b"
	I1115 09:42:07.300111  536565 cri.go:89] found id: ""
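The fourteen IDs above come from a single crictl query filtered by the kube-system namespace label. A sketch of producing that list with plain os/exec instead of minikube's ssh_runner (assumes crictl is on PATH and the caller can sudo):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers returns the IDs of all kube-system containers,
    // running or not, as reported by crictl's quiet output.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            panic(err)
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }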
	I1115 09:42:07.300169  536565 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 09:42:07.320298  536565 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:42:07Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:42:07.320385  536565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:42:07.333727  536565 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:42:07.333736  536565 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:42:07.333810  536565 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:42:07.350045  536565 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:42:07.352968  536565 kubeconfig.go:125] found "functional-755106" server: "https://192.168.49.2:8441"
	I1115 09:42:07.356488  536565 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:42:07.377459  536565 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-15 09:40:08.147263489 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-15 09:42:06.207815784 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
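The diff above is how config drift is detected: the live kubeadm.yaml is compared against the freshly rendered kubeadm.yaml.new, and a non-empty diff triggers a reconfigure. A sketch of that check, treating diff's exit status 1 as "files differ" and anything else as a real error:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func kubeadmConfigDrifted() (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err == nil {
            return false, "", nil // identical, nothing to do
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            return true, string(out), nil // exit 1: the files differ
        }
        return false, "", err // any other failure is a real error
    }

    func main() {
        drifted, diff, err := kubeadmConfigDrifted()
        if err != nil {
            panic(err)
        }
        if drifted {
            fmt.Println("detected kubeadm config drift (will reconfigure cluster):")
            fmt.Println(diff)
        }
    }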
	I1115 09:42:07.377497  536565 kubeadm.go:1161] stopping kube-system containers ...
	I1115 09:42:07.377508  536565 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1115 09:42:07.377565  536565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:42:07.423845  536565 cri.go:89] found id: "4bcc985f378bdd828e885444d8f8b32182050b573f8aefa8ee382214f7900a97"
	I1115 09:42:07.423856  536565 cri.go:89] found id: "640aa85efe01d3aaf4ba16d01361dc78e7516b339f131e045796e5e8b7c467ae"
	I1115 09:42:07.423859  536565 cri.go:89] found id: "427a188f3f3777f9b3b0ebc029aa9864031a10ae537a4436d6d5e6bfeef37fff"
	I1115 09:42:07.423862  536565 cri.go:89] found id: "fcb93ee415b926f78dc112d765468ee9a97e6b83b1a3e80417f1958aa5f08b7b"
	I1115 09:42:07.423864  536565 cri.go:89] found id: "b10a7d37dd8448bccbb1e1df65c8a1a2a1dbcea628747c1544275fb861b96661"
	I1115 09:42:07.423867  536565 cri.go:89] found id: "429ebff84bcedc1afffd2e575f2d90dcb5974df8a0b4330d3f829e4011e8c36f"
	I1115 09:42:07.423869  536565 cri.go:89] found id: "ac27bdc97ed80cf1e1639dfddb2f718b64fcbb712a5612fe729aab3fd6e75015"
	I1115 09:42:07.423871  536565 cri.go:89] found id: "e578bb6c5dc7e15411aaa1845c638dbec6513e60aacc71ce9c9de4780e3aa0a9"
	I1115 09:42:07.423884  536565 cri.go:89] found id: "dc85a7482c9545e43a392a8f3350523d5ea34c5c122c5a48a90ebc6c80cb66f0"
	I1115 09:42:07.423892  536565 cri.go:89] found id: "55f8e1f093714bc5cac28db8db52d2995c6e013516f7507f49f2c078ab94fc3e"
	I1115 09:42:07.423895  536565 cri.go:89] found id: "cfcd700592b5f0751d3467f36ec79e92a1f0f8dbcf7380e3cae00f83a4d05f5e"
	I1115 09:42:07.423897  536565 cri.go:89] found id: "a5c87a6e7fe9eef4379bba2a7e001ad179782fa823867dc7035f00e3f1ea50d8"
	I1115 09:42:07.423899  536565 cri.go:89] found id: "14e25ebbb887c8887215051aaa5fa590ba390042cf207fc03ae6674e9add9ea1"
	I1115 09:42:07.423901  536565 cri.go:89] found id: "8195793bcf63974b4aee3b06b19d0ca41cdde8714a4afd2a13b6fe503283d83b"
	I1115 09:42:07.423904  536565 cri.go:89] found id: ""
	I1115 09:42:07.423909  536565 cri.go:252] Stopping containers: [4bcc985f378bdd828e885444d8f8b32182050b573f8aefa8ee382214f7900a97 640aa85efe01d3aaf4ba16d01361dc78e7516b339f131e045796e5e8b7c467ae 427a188f3f3777f9b3b0ebc029aa9864031a10ae537a4436d6d5e6bfeef37fff fcb93ee415b926f78dc112d765468ee9a97e6b83b1a3e80417f1958aa5f08b7b b10a7d37dd8448bccbb1e1df65c8a1a2a1dbcea628747c1544275fb861b96661 429ebff84bcedc1afffd2e575f2d90dcb5974df8a0b4330d3f829e4011e8c36f ac27bdc97ed80cf1e1639dfddb2f718b64fcbb712a5612fe729aab3fd6e75015 e578bb6c5dc7e15411aaa1845c638dbec6513e60aacc71ce9c9de4780e3aa0a9 dc85a7482c9545e43a392a8f3350523d5ea34c5c122c5a48a90ebc6c80cb66f0 55f8e1f093714bc5cac28db8db52d2995c6e013516f7507f49f2c078ab94fc3e cfcd700592b5f0751d3467f36ec79e92a1f0f8dbcf7380e3cae00f83a4d05f5e a5c87a6e7fe9eef4379bba2a7e001ad179782fa823867dc7035f00e3f1ea50d8 14e25ebbb887c8887215051aaa5fa590ba390042cf207fc03ae6674e9add9ea1 8195793bcf63974b4aee3b06b19d0ca41cdde8714a4afd2a13b6fe503283d83b]
	I1115 09:42:07.423985  536565 ssh_runner.go:195] Run: which crictl
	I1115 09:42:07.430253  536565 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 4bcc985f378bdd828e885444d8f8b32182050b573f8aefa8ee382214f7900a97 640aa85efe01d3aaf4ba16d01361dc78e7516b339f131e045796e5e8b7c467ae 427a188f3f3777f9b3b0ebc029aa9864031a10ae537a4436d6d5e6bfeef37fff fcb93ee415b926f78dc112d765468ee9a97e6b83b1a3e80417f1958aa5f08b7b b10a7d37dd8448bccbb1e1df65c8a1a2a1dbcea628747c1544275fb861b96661 429ebff84bcedc1afffd2e575f2d90dcb5974df8a0b4330d3f829e4011e8c36f ac27bdc97ed80cf1e1639dfddb2f718b64fcbb712a5612fe729aab3fd6e75015 e578bb6c5dc7e15411aaa1845c638dbec6513e60aacc71ce9c9de4780e3aa0a9 dc85a7482c9545e43a392a8f3350523d5ea34c5c122c5a48a90ebc6c80cb66f0 55f8e1f093714bc5cac28db8db52d2995c6e013516f7507f49f2c078ab94fc3e cfcd700592b5f0751d3467f36ec79e92a1f0f8dbcf7380e3cae00f83a4d05f5e a5c87a6e7fe9eef4379bba2a7e001ad179782fa823867dc7035f00e3f1ea50d8 14e25ebbb887c8887215051aaa5fa590ba390042cf207fc03ae6674e9add9ea1 8195793bcf63974b4aee3b06b19d0ca41cdde8714a4afd2a13b6fe503283d83b
	I1115 09:42:18.600475  536565 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 4bcc985f378bdd828e885444d8f8b32182050b573f8aefa8ee382214f7900a97 640aa85efe01d3aaf4ba16d01361dc78e7516b339f131e045796e5e8b7c467ae 427a188f3f3777f9b3b0ebc029aa9864031a10ae537a4436d6d5e6bfeef37fff fcb93ee415b926f78dc112d765468ee9a97e6b83b1a3e80417f1958aa5f08b7b b10a7d37dd8448bccbb1e1df65c8a1a2a1dbcea628747c1544275fb861b96661 429ebff84bcedc1afffd2e575f2d90dcb5974df8a0b4330d3f829e4011e8c36f ac27bdc97ed80cf1e1639dfddb2f718b64fcbb712a5612fe729aab3fd6e75015 e578bb6c5dc7e15411aaa1845c638dbec6513e60aacc71ce9c9de4780e3aa0a9 dc85a7482c9545e43a392a8f3350523d5ea34c5c122c5a48a90ebc6c80cb66f0 55f8e1f093714bc5cac28db8db52d2995c6e013516f7507f49f2c078ab94fc3e cfcd700592b5f0751d3467f36ec79e92a1f0f8dbcf7380e3cae00f83a4d05f5e a5c87a6e7fe9eef4379bba2a7e001ad179782fa823867dc7035f00e3f1ea50d8 14e25ebbb887c8887215051aaa5fa590ba390042cf207fc03ae6674e9add9ea1 8195793bcf63974b4aee3b06b19d0ca41cdde8714a4afd2a13b6fe503283d83b:
(11.170185907s)
	I1115 09:42:18.600539  536565 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1115 09:42:18.708717  536565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:42:18.716612  536565 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 15 09:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 15 09:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 15 09:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 15 09:40 /etc/kubernetes/scheduler.conf
	
	I1115 09:42:18.716678  536565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1115 09:42:18.724316  536565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1115 09:42:18.731877  536565 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:42:18.731942  536565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:42:18.739296  536565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1115 09:42:18.747048  536565 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:42:18.747105  536565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:42:18.754383  536565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1115 09:42:18.761783  536565 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:42:18.761842  536565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:42:18.769193  536565 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:42:18.777116  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:42:18.823117  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:42:21.656975  536565 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.833833867s)
	I1115 09:42:21.657033  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:42:21.886183  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:42:21.949043  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
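The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than doing a full init. A compressed sketch of that sequence; it calls a bare kubeadm binary and drops the sudo/PATH wrapping the log shows:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            if err != nil {
                panic(fmt.Sprintf("kubeadm %v failed: %v\n%s", args, err, out))
            }
        }
        fmt.Println("control plane phases reconfigured")
    }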
	I1115 09:42:22.052038  536565 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:42:22.052115  536565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:22.552916  536565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:23.053057  536565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:23.071282  536565 api_server.go:72] duration metric: took 1.019260935s to wait for apiserver process to appear ...
	I1115 09:42:23.071296  536565 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:42:23.071313  536565 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 09:42:26.678882  536565 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 09:42:26.678899  536565 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 09:42:26.678911  536565 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 09:42:26.714300  536565 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1115 09:42:26.714317  536565 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1115 09:42:27.071800  536565 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 09:42:27.084456  536565 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 09:42:27.084472  536565 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 09:42:27.572127  536565 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 09:42:27.580228  536565 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 09:42:27.580244  536565 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 09:42:28.071510  536565 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 09:42:28.080050  536565 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1115 09:42:28.093788  536565 api_server.go:141] control plane version: v1.34.1
	I1115 09:42:28.093806  536565 api_server.go:131] duration metric: took 5.022505022s to wait for apiserver health ...
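	(Editorial note, not part of the captured log: the healthz wait above progresses from 403 — the unauthenticated probe is rejected — through 500 while post-start hooks such as rbac/bootstrap-roles are still pending, to 200 "ok". Below is a minimal standalone sketch of polling that same endpoint; it is not minikube's api_server.go implementation, and the address 192.168.49.2:8441 plus the skipped certificate verification are assumptions taken from this log only.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// The apiserver here serves a self-signed certificate, so verification
			// is skipped for this illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for attempt := 0; attempt < 20; attempt++ {
			resp, err := client.Get("https://192.168.49.2:8441/healthz")
			if err != nil {
				fmt.Println("not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Prints the same 403/500/"ok" bodies seen in the log above.
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}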
	I1115 09:42:28.093815  536565 cni.go:84] Creating CNI manager for ""
	I1115 09:42:28.093822  536565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:42:28.097374  536565 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:42:28.100459  536565 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:42:28.104633  536565 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:42:28.104644  536565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:42:28.117672  536565 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
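	(Editorial note, not part of the captured log: the CNI step above is two actions — a stat of the portmap plugin binary and a kubectl apply of the generated kindnet manifest. A rough standalone sketch of those two actions follows; it is not minikube's cni.go, and every path is copied from this log and assumed to match this node layout only.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Step 1: the log's "stat /opt/cni/bin/portmap" check.
		if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
			panic(fmt.Errorf("portmap CNI plugin missing: %w", err))
		}
		// Step 2: the log's kubectl apply of the generated CNI manifest.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}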
	I1115 09:42:28.559493  536565 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:42:28.563135  536565 system_pods.go:59] 8 kube-system pods found
	I1115 09:42:28.563155  536565 system_pods.go:61] "coredns-66bc5c9577-lh8vr" [83dc5375-568e-41e8-9bc2-f902ec9e9ef6] Running
	I1115 09:42:28.563164  536565 system_pods.go:61] "etcd-functional-755106" [cec8e609-2f89-40c8-ade0-333096cb5a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:42:28.563169  536565 system_pods.go:61] "kindnet-sxbmt" [3376df71-a51b-4e87-ab83-02d408af1a87] Running
	I1115 09:42:28.563177  536565 system_pods.go:61] "kube-apiserver-functional-755106" [f752bdb5-5896-482b-b39f-90e20462cd3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:42:28.563182  536565 system_pods.go:61] "kube-controller-manager-functional-755106" [c5017e57-2992-44b3-88b5-5c3c8e03dcf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:42:28.563187  536565 system_pods.go:61] "kube-proxy-s2xdm" [eb6e2171-ccae-4e0a-84bd-df06dff6ddc2] Running
	I1115 09:42:28.563193  536565 system_pods.go:61] "kube-scheduler-functional-755106" [e21bf7ec-7922-47ac-9ffd-4ee235caed58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:42:28.563200  536565 system_pods.go:61] "storage-provisioner" [f9c7c0f2-96cd-4980-89f2-400915c56162] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:28.563204  536565 system_pods.go:74] duration metric: took 3.696746ms to wait for pod list to return data ...
	I1115 09:42:28.563211  536565 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:42:28.565780  536565 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 09:42:28.565798  536565 node_conditions.go:123] node cpu capacity is 2
	I1115 09:42:28.565808  536565 node_conditions.go:105] duration metric: took 2.593098ms to run NodePressure ...
	I1115 09:42:28.565868  536565 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1115 09:42:28.833668  536565 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1115 09:42:28.836965  536565 kubeadm.go:744] kubelet initialised
	I1115 09:42:28.836975  536565 kubeadm.go:745] duration metric: took 3.295124ms waiting for restarted kubelet to initialise ...
	I1115 09:42:28.836990  536565 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:42:28.846555  536565 ops.go:34] apiserver oom_adj: -16
	I1115 09:42:28.846566  536565 kubeadm.go:602] duration metric: took 21.512824842s to restartPrimaryControlPlane
	I1115 09:42:28.846574  536565 kubeadm.go:403] duration metric: took 21.656597331s to StartCluster
	I1115 09:42:28.846588  536565 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:42:28.846650  536565 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:42:28.847266  536565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:42:28.847515  536565 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:42:28.847775  536565 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:42:28.847807  536565 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:42:28.847907  536565 addons.go:70] Setting storage-provisioner=true in profile "functional-755106"
	I1115 09:42:28.847922  536565 addons.go:239] Setting addon storage-provisioner=true in "functional-755106"
	W1115 09:42:28.847926  536565 addons.go:248] addon storage-provisioner should already be in state true
	I1115 09:42:28.847928  536565 addons.go:70] Setting default-storageclass=true in profile "functional-755106"
	I1115 09:42:28.847944  536565 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-755106"
	I1115 09:42:28.847946  536565 host.go:66] Checking if "functional-755106" exists ...
	I1115 09:42:28.848266  536565 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
	I1115 09:42:28.848382  536565 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
	I1115 09:42:28.853479  536565 out.go:179] * Verifying Kubernetes components...
	I1115 09:42:28.856331  536565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:42:28.882825  536565 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:42:28.886272  536565 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:42:28.886286  536565 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:42:28.886363  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:42:28.890953  536565 addons.go:239] Setting addon default-storageclass=true in "functional-755106"
	W1115 09:42:28.890964  536565 addons.go:248] addon default-storageclass should already be in state true
	I1115 09:42:28.890989  536565 host.go:66] Checking if "functional-755106" exists ...
	I1115 09:42:28.891565  536565 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
	I1115 09:42:28.930099  536565 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:42:28.930111  536565 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:42:28.930173  536565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:42:28.930324  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:42:28.957927  536565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:42:29.082230  536565 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:42:29.117252  536565 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:42:29.120137  536565 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:42:29.920215  536565 node_ready.go:35] waiting up to 6m0s for node "functional-755106" to be "Ready" ...
	I1115 09:42:29.922996  536565 node_ready.go:49] node "functional-755106" is "Ready"
	I1115 09:42:29.923010  536565 node_ready.go:38] duration metric: took 2.779382ms for node "functional-755106" to be "Ready" ...
	I1115 09:42:29.923022  536565 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:42:29.923083  536565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:29.931210  536565 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 09:42:29.934054  536565 addons.go:515] duration metric: took 1.086236406s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 09:42:29.937938  536565 api_server.go:72] duration metric: took 1.09039603s to wait for apiserver process to appear ...
	I1115 09:42:29.937952  536565 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:42:29.937973  536565 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1115 09:42:29.946865  536565 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1115 09:42:29.947801  536565 api_server.go:141] control plane version: v1.34.1
	I1115 09:42:29.947813  536565 api_server.go:131] duration metric: took 9.855106ms to wait for apiserver health ...
	I1115 09:42:29.947820  536565 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:42:29.950733  536565 system_pods.go:59] 8 kube-system pods found
	I1115 09:42:29.950746  536565 system_pods.go:61] "coredns-66bc5c9577-lh8vr" [83dc5375-568e-41e8-9bc2-f902ec9e9ef6] Running
	I1115 09:42:29.950755  536565 system_pods.go:61] "etcd-functional-755106" [cec8e609-2f89-40c8-ade0-333096cb5a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:42:29.950760  536565 system_pods.go:61] "kindnet-sxbmt" [3376df71-a51b-4e87-ab83-02d408af1a87] Running
	I1115 09:42:29.950766  536565 system_pods.go:61] "kube-apiserver-functional-755106" [f752bdb5-5896-482b-b39f-90e20462cd3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:42:29.950772  536565 system_pods.go:61] "kube-controller-manager-functional-755106" [c5017e57-2992-44b3-88b5-5c3c8e03dcf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:42:29.950776  536565 system_pods.go:61] "kube-proxy-s2xdm" [eb6e2171-ccae-4e0a-84bd-df06dff6ddc2] Running
	I1115 09:42:29.950783  536565 system_pods.go:61] "kube-scheduler-functional-755106" [e21bf7ec-7922-47ac-9ffd-4ee235caed58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:42:29.950789  536565 system_pods.go:61] "storage-provisioner" [f9c7c0f2-96cd-4980-89f2-400915c56162] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:29.950793  536565 system_pods.go:74] duration metric: took 2.968636ms to wait for pod list to return data ...
	I1115 09:42:29.950799  536565 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:42:29.952951  536565 default_sa.go:45] found service account: "default"
	I1115 09:42:29.952961  536565 default_sa.go:55] duration metric: took 2.157418ms for default service account to be created ...
	I1115 09:42:29.952968  536565 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:42:29.955522  536565 system_pods.go:86] 8 kube-system pods found
	I1115 09:42:29.955536  536565 system_pods.go:89] "coredns-66bc5c9577-lh8vr" [83dc5375-568e-41e8-9bc2-f902ec9e9ef6] Running
	I1115 09:42:29.955545  536565 system_pods.go:89] "etcd-functional-755106" [cec8e609-2f89-40c8-ade0-333096cb5a47] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:42:29.955550  536565 system_pods.go:89] "kindnet-sxbmt" [3376df71-a51b-4e87-ab83-02d408af1a87] Running
	I1115 09:42:29.955557  536565 system_pods.go:89] "kube-apiserver-functional-755106" [f752bdb5-5896-482b-b39f-90e20462cd3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:42:29.955562  536565 system_pods.go:89] "kube-controller-manager-functional-755106" [c5017e57-2992-44b3-88b5-5c3c8e03dcf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:42:29.955571  536565 system_pods.go:89] "kube-proxy-s2xdm" [eb6e2171-ccae-4e0a-84bd-df06dff6ddc2] Running
	I1115 09:42:29.955577  536565 system_pods.go:89] "kube-scheduler-functional-755106" [e21bf7ec-7922-47ac-9ffd-4ee235caed58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:42:29.955581  536565 system_pods.go:89] "storage-provisioner" [f9c7c0f2-96cd-4980-89f2-400915c56162] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:42:29.955587  536565 system_pods.go:126] duration metric: took 2.614734ms to wait for k8s-apps to be running ...
	I1115 09:42:29.955593  536565 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:42:29.955649  536565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:42:29.969166  536565 system_svc.go:56] duration metric: took 13.56209ms WaitForService to wait for kubelet
	I1115 09:42:29.969184  536565 kubeadm.go:587] duration metric: took 1.121647304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:42:29.969201  536565 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:42:29.972804  536565 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 09:42:29.972820  536565 node_conditions.go:123] node cpu capacity is 2
	I1115 09:42:29.972829  536565 node_conditions.go:105] duration metric: took 3.623722ms to run NodePressure ...
	I1115 09:42:29.972841  536565 start.go:242] waiting for startup goroutines ...
	I1115 09:42:29.972848  536565 start.go:247] waiting for cluster config update ...
	I1115 09:42:29.972858  536565 start.go:256] writing updated cluster config ...
	I1115 09:42:29.973156  536565 ssh_runner.go:195] Run: rm -f paused
	I1115 09:42:29.976666  536565 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:42:29.980140  536565 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lh8vr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:29.984797  536565 pod_ready.go:94] pod "coredns-66bc5c9577-lh8vr" is "Ready"
	I1115 09:42:29.984810  536565 pod_ready.go:86] duration metric: took 4.657111ms for pod "coredns-66bc5c9577-lh8vr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:29.987523  536565 pod_ready.go:83] waiting for pod "etcd-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 09:42:31.993097  536565 pod_ready.go:104] pod "etcd-functional-755106" is not "Ready", error: <nil>
	W1115 09:42:34.493681  536565 pod_ready.go:104] pod "etcd-functional-755106" is not "Ready", error: <nil>
	W1115 09:42:36.495185  536565 pod_ready.go:104] pod "etcd-functional-755106" is not "Ready", error: <nil>
	I1115 09:42:36.992694  536565 pod_ready.go:94] pod "etcd-functional-755106" is "Ready"
	I1115 09:42:36.992710  536565 pod_ready.go:86] duration metric: took 7.00517236s for pod "etcd-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:36.995439  536565 pod_ready.go:83] waiting for pod "kube-apiserver-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 09:42:39.001196  536565 pod_ready.go:104] pod "kube-apiserver-functional-755106" is not "Ready", error: <nil>
	I1115 09:42:40.501168  536565 pod_ready.go:94] pod "kube-apiserver-functional-755106" is "Ready"
	I1115 09:42:40.501182  536565 pod_ready.go:86] duration metric: took 3.505730764s for pod "kube-apiserver-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.503647  536565 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.508286  536565 pod_ready.go:94] pod "kube-controller-manager-functional-755106" is "Ready"
	I1115 09:42:40.508299  536565 pod_ready.go:86] duration metric: took 4.640357ms for pod "kube-controller-manager-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.510414  536565 pod_ready.go:83] waiting for pod "kube-proxy-s2xdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.515238  536565 pod_ready.go:94] pod "kube-proxy-s2xdm" is "Ready"
	I1115 09:42:40.515251  536565 pod_ready.go:86] duration metric: took 4.825951ms for pod "kube-proxy-s2xdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.517785  536565 pod_ready.go:83] waiting for pod "kube-scheduler-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.698738  536565 pod_ready.go:94] pod "kube-scheduler-functional-755106" is "Ready"
	I1115 09:42:40.698753  536565 pod_ready.go:86] duration metric: took 180.955998ms for pod "kube-scheduler-functional-755106" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:42:40.698764  536565 pod_ready.go:40] duration metric: took 10.722079362s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:42:40.752766  536565 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 09:42:40.756356  536565 out.go:179] * Done! kubectl is now configured to use "functional-755106" cluster and "default" namespace by default
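	(Editorial note, not part of the captured log: the tail of the start log above waits, per label, for the kube-system control-plane pods to report Ready. A minimal sketch of that check, assuming client-go; it is not the test harness's pod_ready.go, and the kubeconfig path is the one reported in this log — adjust it for your environment.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21895-514793/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same label selectors the wait above iterates over.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
						break
					}
				}
				fmt.Printf("%-45s %-35s ready=%v\n", p.Name, sel, ready)
			}
		}
	}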
	
	
	==> CRI-O <==
	Nov 15 09:43:20 functional-755106 crio[3591]: time="2025-11-15T09:43:20.103556643Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bae28b65-b1ff-4544-9643-e05c14052cc4 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.106153538Z" level=info msg="Removing container: 348e8f967676caf0d51ae5a85e12f68a841947cf8b8fc2e8d3a21ff848b99fdd" id=7119c36c-69c6-4ab8-8260-1bf8cae3e155 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.108467867Z" level=info msg="Error loading conmon cgroup of container 348e8f967676caf0d51ae5a85e12f68a841947cf8b8fc2e8d3a21ff848b99fdd: cgroup deleted" id=7119c36c-69c6-4ab8-8260-1bf8cae3e155 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.112138373Z" level=info msg="Removed container 348e8f967676caf0d51ae5a85e12f68a841947cf8b8fc2e8d3a21ff848b99fdd: default/sp-pod/myfrontend" id=7119c36c-69c6-4ab8-8260-1bf8cae3e155 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.114077271Z" level=info msg="Stopping pod sandbox: a96d43a5ff9cc89a6af1d080bcadd15f01d14ae62e740d83f8175ded0e490759" id=cd54e979-3489-4afe-9946-4c79df17d2e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.114132071Z" level=info msg="Stopped pod sandbox (already stopped): a96d43a5ff9cc89a6af1d080bcadd15f01d14ae62e740d83f8175ded0e490759" id=cd54e979-3489-4afe-9946-4c79df17d2e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.11475867Z" level=info msg="Removing pod sandbox: a96d43a5ff9cc89a6af1d080bcadd15f01d14ae62e740d83f8175ded0e490759" id=d43f13ee-77c3-406a-90f2-b74ed9b453af name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.119432676Z" level=info msg="Removed pod sandbox: a96d43a5ff9cc89a6af1d080bcadd15f01d14ae62e740d83f8175ded0e490759" id=d43f13ee-77c3-406a-90f2-b74ed9b453af name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.121219955Z" level=info msg="Stopping pod sandbox: cf27d8f4ad2ea0e36935cb1208f08d9272c7b42a31c103c1f5cb3846e3e73285" id=c10b7d1c-6287-432a-bb31-f91fe123fc0e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.121281147Z" level=info msg="Stopped pod sandbox (already stopped): cf27d8f4ad2ea0e36935cb1208f08d9272c7b42a31c103c1f5cb3846e3e73285" id=c10b7d1c-6287-432a-bb31-f91fe123fc0e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.124015656Z" level=info msg="Removing pod sandbox: cf27d8f4ad2ea0e36935cb1208f08d9272c7b42a31c103c1f5cb3846e3e73285" id=ac0a1686-5a01-4f7a-b9d0-72b4f76b4f75 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.127924876Z" level=info msg="Removed pod sandbox: cf27d8f4ad2ea0e36935cb1208f08d9272c7b42a31c103c1f5cb3846e3e73285" id=ac0a1686-5a01-4f7a-b9d0-72b4f76b4f75 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.128762907Z" level=info msg="Stopping pod sandbox: 89fb0df7b95a98a7027a161e23204b5a5bd95f2cc0bf251d7b006226be6b767b" id=e7927e1e-00bf-4f56-8d47-26ec003371e8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.128810897Z" level=info msg="Stopped pod sandbox (already stopped): 89fb0df7b95a98a7027a161e23204b5a5bd95f2cc0bf251d7b006226be6b767b" id=e7927e1e-00bf-4f56-8d47-26ec003371e8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.129203058Z" level=info msg="Removing pod sandbox: 89fb0df7b95a98a7027a161e23204b5a5bd95f2cc0bf251d7b006226be6b767b" id=99c7069f-41a2-4358-83e7-538a8beaaa8a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:43:22 functional-755106 crio[3591]: time="2025-11-15T09:43:22.133076906Z" level=info msg="Removed pod sandbox: 89fb0df7b95a98a7027a161e23204b5a5bd95f2cc0bf251d7b006226be6b767b" id=99c7069f-41a2-4358-83e7-538a8beaaa8a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 15 09:43:35 functional-755106 crio[3591]: time="2025-11-15T09:43:35.028020488Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=918c4096-93d0-4db3-b9ba-5929d105c187 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:43:37 functional-755106 crio[3591]: time="2025-11-15T09:43:37.028480567Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9f55cd05-7a5e-4611-8364-d47775766e92 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:44:02 functional-755106 crio[3591]: time="2025-11-15T09:44:02.028362644Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1b3d38ed-d5bb-47b5-a77b-8a8b39e22765 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:44:19 functional-755106 crio[3591]: time="2025-11-15T09:44:19.027412621Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=88aae372-4ea9-4327-9bed-1b17bbf4951f name=/runtime.v1.ImageService/PullImage
	Nov 15 09:44:42 functional-755106 crio[3591]: time="2025-11-15T09:44:42.033342819Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f74c39e1-ac1d-47d2-9da1-78e85b9d56a1 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:45:51 functional-755106 crio[3591]: time="2025-11-15T09:45:51.028261891Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=be307fec-5180-4269-bfd2-60ff72c59136 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:46:16 functional-755106 crio[3591]: time="2025-11-15T09:46:16.029147497Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=59277550-62d5-4454-a17b-16d1de971fa2 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:48:37 functional-755106 crio[3591]: time="2025-11-15T09:48:37.028013997Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=506a73f6-8398-4e87-b31a-ddf36dd2bdb8 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:49:07 functional-755106 crio[3591]: time="2025-11-15T09:49:07.028119898Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2f7a9dd2-b061-4ef3-9bdd-77089569aec8 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	64a08af7ab75c       docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33   9 minutes ago       Running             myfrontend                0                   7fab77151347b       sp-pod                                      default
	550d307a135e4       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   c844677f1399d       nginx-svc                                   default
	229f90c6a2304       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       5                   4406945b0b80e       storage-provisioner                         kube-system
	ad58294b8ece9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   22578ae7a2481       kindnet-sxbmt                               kube-system
	90cf08816778b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   16ca9fd18f0f5       kube-proxy-s2xdm                            kube-system
	d863cb5003441       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       4                   4406945b0b80e       storage-provisioner                         kube-system
	af96a77f728f7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   c3d89fdf08cc0       kube-apiserver-functional-755106            kube-system
	673b9f8d4ca8f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   cb36dc078e60c       kube-controller-manager-functional-755106   kube-system
	4c7150206b30f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   5841b7229a69d       kube-scheduler-functional-755106            kube-system
	e9d2f8a01c2d1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   dc3894f902399       etcd-functional-755106                      kube-system
	c086f594f6da6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   1d3ac357a6c09       coredns-66bc5c9577-lh8vr                    kube-system
	4bcc985f378bd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Exited              kube-scheduler            2                   5841b7229a69d       kube-scheduler-functional-755106            kube-system
	427a188f3f377       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Exited              kube-controller-manager   2                   cb36dc078e60c       kube-controller-manager-functional-755106   kube-system
	fcb93ee415b92       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Exited              kindnet-cni               2                   22578ae7a2481       kindnet-sxbmt                               kube-system
	b10a7d37dd844       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Exited              kube-proxy                2                   16ca9fd18f0f5       kube-proxy-s2xdm                            kube-system
	429ebff84bced       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Exited              etcd                      2                   dc3894f902399       etcd-functional-755106                      kube-system
	a5c87a6e7fe9e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   1d3ac357a6c09       coredns-66bc5c9577-lh8vr                    kube-system
	
	
	==> coredns [a5c87a6e7fe9eef4379bba2a7e001ad179782fa823867dc7035f00e3f1ea50d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34970 - 40086 "HINFO IN 2162550636184396960.5909422308317381838. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020691072s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
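	(Editorial note, not part of the captured log: the "forbidden" errors above occur while coredns lists resources before the apiserver's rbac/bootstrap-roles post-start hook — see the earlier 500 healthz output — has finished. A small sketch, assuming client-go and the kubeconfig path from this log, of asking the API server whether the coredns service account may list EndpointSlices; this is not part of the test suite itself.)

	package main

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21895-514793/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirrors the failing call in the coredns log: list endpointslices in
		// the discovery.k8s.io group as the coredns service account.
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:serviceaccount:kube-system:coredns",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Group:    "discovery.k8s.io",
					Resource: "endpointslices",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}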
	
	
	==> coredns [c086f594f6da654b663300795f5ad63f782b7d0456f5135a1aaaa94000d1af20] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39623 - 13640 "HINFO IN 4506725310225092981.6198760617015164905. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017718342s
	
	
	==> describe nodes <==
	Name:               functional-755106
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-755106
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=functional-755106
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_40_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:40:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-755106
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:52:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:52:28 +0000   Sat, 15 Nov 2025 09:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:52:28 +0000   Sat, 15 Nov 2025 09:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:52:28 +0000   Sat, 15 Nov 2025 09:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:52:28 +0000   Sat, 15 Nov 2025 09:41:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-755106
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6461f260-dd6f-417d-86cd-79d392117e3a
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-mgr4f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  default                     hello-node-connect-7d85dfc575-vnnmd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 coredns-66bc5c9577-lh8vr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-755106                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-sxbmt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-755106             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-755106    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s2xdm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-755106             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-755106 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-755106 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-755106 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-755106 event: Registered Node functional-755106 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-755106 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-755106 event: Registered Node functional-755106 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-755106 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-755106 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-755106 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-755106 event: Registered Node functional-755106 in Controller
	
	
	==> dmesg <==
	[Nov15 09:14] overlayfs: idmapped layers are currently not supported
	[ +52.677127] overlayfs: idmapped layers are currently not supported
	[Nov15 09:15] overlayfs: idmapped layers are currently not supported
	[ +18.264224] overlayfs: idmapped layers are currently not supported
	[Nov15 09:16] overlayfs: idmapped layers are currently not supported
	[Nov15 09:17] overlayfs: idmapped layers are currently not supported
	[Nov15 09:19] overlayfs: idmapped layers are currently not supported
	[ +25.565300] overlayfs: idmapped layers are currently not supported
	[Nov15 09:20] overlayfs: idmapped layers are currently not supported
	[Nov15 09:21] overlayfs: idmapped layers are currently not supported
	[Nov15 09:22] overlayfs: idmapped layers are currently not supported
	[ +46.757118] overlayfs: idmapped layers are currently not supported
	[Nov15 09:23] overlayfs: idmapped layers are currently not supported
	[ +24.765155] overlayfs: idmapped layers are currently not supported
	[Nov15 09:24] overlayfs: idmapped layers are currently not supported
	[Nov15 09:25] overlayfs: idmapped layers are currently not supported
	[Nov15 09:26] overlayfs: idmapped layers are currently not supported
	[Nov15 09:27] overlayfs: idmapped layers are currently not supported
	[ +25.160027] overlayfs: idmapped layers are currently not supported
	[Nov15 09:29] overlayfs: idmapped layers are currently not supported
	[ +40.626123] overlayfs: idmapped layers are currently not supported
	[Nov15 09:32] kauditd_printk_skb: 8 callbacks suppressed
	[Nov15 09:33] overlayfs: idmapped layers are currently not supported
	[Nov15 09:39] overlayfs: idmapped layers are currently not supported
	[Nov15 09:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [429ebff84bcedc1afffd2e575f2d90dcb5974df8a0b4330d3f829e4011e8c36f] <==
	{"level":"warn","ts":"2025-11-15T09:42:09.654627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:09.669464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:09.690946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:09.715713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:09.731335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:09.744071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:09.834239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:42:18.458659Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T09:42:18.458707Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-755106","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-15T09:42:18.458814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:42:18.460400Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:42:18.460458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:42:18.460490Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-15T09:42:18.460547Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-15T09:42:18.460540Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:42:18.460572Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:42:18.460581Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:42:18.460562Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T09:42:18.460611Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:42:18.460622Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:42:18.460629Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:42:18.464412Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-15T09:42:18.464511Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:42:18.464536Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-15T09:42:18.464544Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-755106","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e9d2f8a01c2d1acc24a7265265497679cf25485acd597b1a6f86a7e05c5b8a0c] <==
	{"level":"warn","ts":"2025-11-15T09:42:25.049279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.066669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.104189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.131724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.153799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.184716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.208594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.241027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.269182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.296783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.315441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.347247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.369706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.399700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.450938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.463080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.492474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.522365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.550940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.589789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.606286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:42:25.753702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:52:23.670066Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2025-11-15T09:52:23.694042Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1096,"took":"23.678548ms","hash":2555891978,"current-db-size-bytes":3280896,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1384448,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-15T09:52:23.694128Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2555891978,"revision":1096,"compact-revision":-1}
	
	
	==> kernel <==
	 09:53:01 up  4:35,  0 user,  load average: 0.19, 0.40, 1.19
	Linux functional-755106 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ad58294b8ece94ab8ceeb38bf4f0588bd66ebc9ab2785fe1e39fbbbe8034f3a6] <==
	I1115 09:50:57.622089       1 main.go:301] handling current node
	I1115 09:51:07.622068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:51:07.622104       1 main.go:301] handling current node
	I1115 09:51:17.625838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:51:17.625881       1 main.go:301] handling current node
	I1115 09:51:27.621260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:51:27.621383       1 main.go:301] handling current node
	I1115 09:51:37.624783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:51:37.624820       1 main.go:301] handling current node
	I1115 09:51:47.622234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:51:47.622339       1 main.go:301] handling current node
	I1115 09:51:57.622272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:51:57.622308       1 main.go:301] handling current node
	I1115 09:52:07.625671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:52:07.625707       1 main.go:301] handling current node
	I1115 09:52:17.622061       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:52:17.622093       1 main.go:301] handling current node
	I1115 09:52:27.622037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:52:27.622147       1 main.go:301] handling current node
	I1115 09:52:37.622952       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:52:37.623143       1 main.go:301] handling current node
	I1115 09:52:47.626110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:52:47.626147       1 main.go:301] handling current node
	I1115 09:52:57.625683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:52:57.625721       1 main.go:301] handling current node
	
	
	==> kindnet [fcb93ee415b926f78dc112d765468ee9a97e6b83b1a3e80417f1958aa5f08b7b] <==
	I1115 09:42:07.250306       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:42:07.252127       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1115 09:42:07.253118       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:42:07.253292       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:42:07.253347       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:42:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:42:07.468487       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:42:07.468547       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:42:07.468592       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:42:07.469311       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:42:10.568861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:42:10.568985       1 metrics.go:72] Registering metrics
	I1115 09:42:10.569108       1 controller.go:711] "Syncing nftables rules"
	I1115 09:42:17.473684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:42:17.473745       1 main.go:301] handling current node
	
	
	==> kube-apiserver [af96a77f728f74cc8de041dd2153417465f6ca5f96ef020c7a15a9f185f98707] <==
	I1115 09:42:26.771330       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 09:42:26.771363       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:42:26.771393       1 cache.go:39] Caches are synced for autoregister controller
	I1115 09:42:26.771761       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 09:42:26.771842       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:42:26.750893       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 09:42:26.782905       1 policy_source.go:240] refreshing policies
	I1115 09:42:26.783588       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:42:26.832073       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:42:27.019960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:42:27.538229       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1115 09:42:27.870887       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1115 09:42:27.872369       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:42:27.879483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:42:28.552697       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:42:28.679656       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:42:28.773176       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:42:28.781913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:42:44.104495       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.38.232"}
	I1115 09:42:50.573203       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.35.248"}
	I1115 09:42:58.997774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:42:59.169038       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.108.188"}
	E1115 09:43:13.331695       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1115 09:43:19.851423       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.98.95"}
	I1115 09:52:26.731706       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [427a188f3f3777f9b3b0ebc029aa9864031a10ae537a4436d6d5e6bfeef37fff] <==
	I1115 09:42:09.124679       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:42:09.575101       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 09:42:09.577645       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:42:09.580151       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 09:42:09.580483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:42:09.580537       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 09:42:09.580547       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [673b9f8d4ca8f63ffd3bbcdb9d7b095242cfa2a3cd272d319691d7244a3bc883] <==
	I1115 09:42:30.099919       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 09:42:30.102821       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 09:42:30.103453       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:42:30.104090       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-755106"
	I1115 09:42:30.104205       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 09:42:30.114154       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:42:30.117507       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 09:42:30.122673       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 09:42:30.125164       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:42:30.129513       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 09:42:30.129513       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:42:30.138751       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:42:30.138938       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 09:42:30.139735       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 09:42:30.139788       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 09:42:30.139932       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:42:30.140009       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 09:42:30.140326       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:42:30.142616       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:42:30.142634       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:42:30.152107       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:42:30.156531       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 09:42:30.160868       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:42:30.163209       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 09:42:30.166905       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [90cf08816778b07da7be9ee71877378b8970b14a35afb612bb65d2d1efc02f3c] <==
	I1115 09:42:27.433856       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:42:27.518529       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:42:27.622895       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:42:27.623000       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:42:27.623144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:42:27.653197       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:42:27.653255       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:42:27.660651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:42:27.660930       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:42:27.660951       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:42:27.663725       1 config.go:200] "Starting service config controller"
	I1115 09:42:27.663749       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:42:27.663769       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:42:27.663773       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:42:27.663785       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:42:27.663789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:42:27.664784       1 config.go:309] "Starting node config controller"
	I1115 09:42:27.664805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:42:27.664811       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:42:27.764579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:42:27.764621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:42:27.764662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b10a7d37dd8448bccbb1e1df65c8a1a2a1dbcea628747c1544275fb861b96661] <==
	I1115 09:42:06.996981       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:42:07.107098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1115 09:42:07.107895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-755106&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 09:42:10.607274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:42:10.607338       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:42:10.607437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:42:10.626699       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:42:10.626751       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:42:10.630985       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:42:10.631303       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:42:10.631375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:42:10.635355       1 config.go:200] "Starting service config controller"
	I1115 09:42:10.635377       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:42:10.635397       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:42:10.635401       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:42:10.635413       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:42:10.635417       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:42:10.636175       1 config.go:309] "Starting node config controller"
	I1115 09:42:10.636244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:42:10.636277       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:42:10.736199       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:42:10.736223       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:42:10.736254       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4bcc985f378bdd828e885444d8f8b32182050b573f8aefa8ee382214f7900a97] <==
	
	
	==> kube-scheduler [4c7150206b30f84f7daa3bdd87aed7389fd744d9c332e8d478ca7060a50dd339] <==
	I1115 09:42:26.044784       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:42:27.147288       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 09:42:27.147383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:42:27.157408       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 09:42:27.158010       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 09:42:27.158082       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 09:42:27.158138       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:42:27.162237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:42:27.165842       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:42:27.163589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:42:27.166058       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:42:27.258818       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 09:42:27.267158       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:42:27.267315       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:50:23 functional-755106 kubelet[4241]: E1115 09:50:23.027243    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:50:35 functional-755106 kubelet[4241]: E1115 09:50:35.027132    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:50:36 functional-755106 kubelet[4241]: E1115 09:50:36.026947    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:50:49 functional-755106 kubelet[4241]: E1115 09:50:49.027622    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:50:49 functional-755106 kubelet[4241]: E1115 09:50:49.027709    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:51:01 functional-755106 kubelet[4241]: E1115 09:51:01.027536    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:51:04 functional-755106 kubelet[4241]: E1115 09:51:04.027726    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:51:12 functional-755106 kubelet[4241]: E1115 09:51:12.029771    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:51:17 functional-755106 kubelet[4241]: E1115 09:51:17.027543    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:51:24 functional-755106 kubelet[4241]: E1115 09:51:24.027807    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:51:31 functional-755106 kubelet[4241]: E1115 09:51:31.027864    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:51:36 functional-755106 kubelet[4241]: E1115 09:51:36.027142    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:51:42 functional-755106 kubelet[4241]: E1115 09:51:42.028819    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:51:51 functional-755106 kubelet[4241]: E1115 09:51:51.027595    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:51:56 functional-755106 kubelet[4241]: E1115 09:51:56.026883    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:52:03 functional-755106 kubelet[4241]: E1115 09:52:03.027511    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:52:09 functional-755106 kubelet[4241]: E1115 09:52:09.027258    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:52:16 functional-755106 kubelet[4241]: E1115 09:52:16.027926    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:52:20 functional-755106 kubelet[4241]: E1115 09:52:20.027951    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:52:28 functional-755106 kubelet[4241]: E1115 09:52:28.028330    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:52:32 functional-755106 kubelet[4241]: E1115 09:52:32.028573    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:52:42 functional-755106 kubelet[4241]: E1115 09:52:42.027972    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:52:45 functional-755106 kubelet[4241]: E1115 09:52:45.027357    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	Nov 15 09:52:53 functional-755106 kubelet[4241]: E1115 09:52:53.027028    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-mgr4f" podUID="58b5df24-dd33-4a37-8526-fef0d53cab69"
	Nov 15 09:52:59 functional-755106 kubelet[4241]: E1115 09:52:59.027154    4241 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vnnmd" podUID="ee34115b-cb43-4169-b8e8-9ced901de6c7"
	
	
	==> storage-provisioner [229f90c6a2304550bcaca46ab3d06c164d4d00e7f0280b816738acf3c61081d7] <==
	W1115 09:52:36.104916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:38.108568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:38.112870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:40.116400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:40.121269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:42.127517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:42.136605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:44.139743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:44.146256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:46.149533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:46.153892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:48.156880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:48.163381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:50.167810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:50.172714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:52.176591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:52.186926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:54.190083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:54.194360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:56.203065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:56.208373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:58.211797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:52:58.218385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:53:00.222818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:53:00.246380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d863cb5003441d148fb0acd1611def56ec3c630936cc2ba853d8d53a1c0f8676] <==
	I1115 09:42:27.353096       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 09:42:27.356393       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-755106 -n functional-755106
helpers_test.go:269: (dbg) Run:  kubectl --context functional-755106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-mgr4f hello-node-connect-7d85dfc575-vnnmd
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-755106 describe pod hello-node-75c85bcc94-mgr4f hello-node-connect-7d85dfc575-vnnmd
helpers_test.go:290: (dbg) kubectl --context functional-755106 describe pod hello-node-75c85bcc94-mgr4f hello-node-connect-7d85dfc575-vnnmd:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-mgr4f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-755106/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:43:19 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbstb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kbstb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m42s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mgr4f to functional-755106
	  Normal   Pulling    6m46s (x5 over 9m42s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m46s (x5 over 9m42s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m46s (x5 over 9m42s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m37s (x20 over 9m42s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m22s (x21 over 9m42s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-vnnmd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-755106/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:42:59 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2ppfb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2ppfb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vnnmd to functional-755106
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.79s)
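The post-mortem above traces the failure to CRI-O's short-name policy: with short-name mode set to enforcing, the unqualified reference "kicbase/echo-server" is rejected because more than one configured search registry could resolve it, so the pod never gets past ImagePullBackOff. A minimal sketch of two ways the pull could be unblocked, assuming the same kubectl context; the docker.io location and the "1.0" tag are assumptions for illustration, not taken from the test run:

    # Point the deployment at a fully qualified reference instead of a short name.
    kubectl --context functional-755106 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:1.0

    # Or relax the policy on the node in /etc/containers/registries.conf:
    #   short-name-mode = "permissive"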

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-755106 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-755106 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-mgr4f" [58b5df24-dd33-4a37-8526-fef0d53cab69] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1115 09:45:29.824229  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:57.527628  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:50:29.824425  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-755106 -n functional-755106
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-15 09:53:20.349776064 +0000 UTC m=+1244.592230027
functional_test.go:1460: (dbg) Run:  kubectl --context functional-755106 describe po hello-node-75c85bcc94-mgr4f -n default
functional_test.go:1460: (dbg) kubectl --context functional-755106 describe po hello-node-75c85bcc94-mgr4f -n default:
Name:             hello-node-75c85bcc94-mgr4f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-755106/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:43:19 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbstb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kbstb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-mgr4f to functional-755106
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-755106 logs hello-node-75c85bcc94-mgr4f -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-755106 logs hello-node-75c85bcc94-mgr4f -n default: exit status 1 (107.807841ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-mgr4f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-755106 logs hello-node-75c85bcc94-mgr4f -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.96s)
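This test creates the deployment with the same unqualified image ("create deployment hello-node --image kicbase/echo-server" above), so it hits the same short-name enforcement and the pod stays in ImagePullBackOff for the full 10m0s wait. A small sketch, assuming the same context, of reading the waiting reason straight from the pod status rather than from describe output:

    kubectl --context functional-755106 get pod -l app=hello-node \
      -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}'
    # Prints ImagePullBackOff (or ErrImagePull) while the pull keeps failing.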

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 service --namespace=default --https --url hello-node: exit status 115 (488.083061ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31435
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-755106 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)
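SVC_UNREACHABLE follows directly from the DeployApp failure: the hello-node service was allocated NodePort 31435, but its only pod is stuck in ImagePullBackOff, so minikube finds no running backend. A minimal check, assuming the same context, that separates "service missing" from "service has no ready endpoints":

    kubectl --context functional-755106 get service hello-node
    kubectl --context functional-755106 get endpointslices -l kubernetes.io/service-name=hello-node
    # While the pod is not ready, the EndpointSlice lists its IP with ready=false,
    # which is why the URL on port 31435 would not answer even though it is printed.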

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 service hello-node --url --format={{.IP}}: exit status 115 (595.183306ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-755106 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 service hello-node --url: exit status 115 (512.110138ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31435
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-755106 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31435
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)
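
All three ServiceCmd failures above (HTTPS, Format, URL) exit with SVC_UNREACHABLE for the same reason: the hello-node service in the default namespace has no running pod behind it, even though the {{.IP}} template and the URL were rendered correctly (192.168.49.2:31435). A minimal way to confirm the missing endpoints by hand, sketched with minikube's bundled kubectl and assuming only the profile, service name, and namespace shown in the failing commands:

	# An empty ENDPOINTS column for hello-node confirms why the service commands bail out.
	out/minikube-linux-arm64 -p functional-755106 kubectl -- get svc,endpoints hello-node -n default
	# List pods in the namespace; none should be Running/Ready for the hello-node workload.
	out/minikube-linux-arm64 -p functional-755106 kubectl -- get pods -n default -o wide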

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image load --daemon kicbase/echo-server:functional-755106 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-755106" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)
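
This image load failure (and ImageReloadDaemon and ImageTagAndLoadDaemon below) follows one pattern: `image load --daemon` returns success, but the tag never appears in `image ls`. A quick manual check of both views, a sketch using only commands already present in this report plus crictl inside the node (the grep patterns are illustrative):

	out/minikube-linux-arm64 -p functional-755106 image load --daemon kicbase/echo-server:functional-755106 --alsologtostderr
	out/minikube-linux-arm64 -p functional-755106 image ls | grep echo-server
	# Ask the crio runtime inside the node directly, to see whether the image arrived but the listing missed it.
	out/minikube-linux-arm64 -p functional-755106 ssh -- "sudo crictl images | grep echo-server"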

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image load --daemon kicbase/echo-server:functional-755106 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-755106" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-755106
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image load --daemon kicbase/echo-server:functional-755106 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-755106" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image save kicbase/echo-server:functional-755106 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
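
Here `image save` exits cleanly but the tarball is never written, which is exactly what the assertion checks. Verifying by hand needs nothing beyond the path from the failing command:

	out/minikube-linux-arm64 -p functional-755106 image save kicbase/echo-server:functional-755106 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
	# Both of these should succeed if the save actually produced an archive.
	ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	tar -tf /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar | head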

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1115 09:53:32.547695  545343 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:53:32.548448  545343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:32.548473  545343 out.go:374] Setting ErrFile to fd 2...
	I1115 09:53:32.548479  545343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:32.548757  545343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:53:32.549579  545343 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:32.549743  545343 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:32.550522  545343 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
	I1115 09:53:32.579411  545343 ssh_runner.go:195] Run: systemctl --version
	I1115 09:53:32.579471  545343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
	I1115 09:53:32.598648  545343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
	I1115 09:53:32.704900  545343 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1115 09:53:32.704949  545343 cache_images.go:255] Failed to load cached images for "functional-755106": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1115 09:53:32.704966  545343 cache_images.go:267] failed pushing to: functional-755106

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
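
This failure is a direct cascade of ImageSaveToFile above: the stderr shows the tarball was never written (`stat ... no such file or directory`), so `image load` had nothing to load. One hedged way to exercise the load path independently of the broken save is to produce the tarball from the host Docker daemon instead; the /tmp path below is illustrative only:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-755106
	docker save -o /tmp/echo-server.tar kicbase/echo-server:functional-755106
	out/minikube-linux-arm64 -p functional-755106 image load /tmp/echo-server.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-755106 image ls | grep echo-server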

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-755106
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image save --daemon kicbase/echo-server:functional-755106 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-755106
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-755106: exit status 1 (21.470574ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-755106

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-755106

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
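
The daemon-direction save fails the same way: the host-side tag is removed, `image save --daemon` runs, yet `docker image inspect localhost/kicbase/echo-server:functional-755106` still finds nothing. A minimal manual check against the host daemon, using only commands from this report plus `docker images`:

	out/minikube-linux-arm64 -p functional-755106 image save --daemon kicbase/echo-server:functional-755106 --alsologtostderr
	# The tag should reappear in the host Docker daemon if the save worked.
	docker images | grep echo-server
	docker image inspect localhost/kicbase/echo-server:functional-755106 --format '{{.Id}}'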

                                                
                                    
TestJSONOutput/pause/Command (1.85s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-187814 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-187814 --output=json --user=testUser: exit status 80 (1.84438517s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70b5d6c8-ceb2-4ce9-beba-c292c1705271","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-187814 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0f67e5ac-da7e-42d5-ab76-be9aebd94f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T10:06:19Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5223da13-c6d5-4563-a708-042078517158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-187814 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.85s)

                                                
                                    
TestJSONOutput/unpause/Command (1.85s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-187814 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-187814 --output=json --user=testUser: exit status 80 (1.847555507s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"56c521a9-ce1b-4183-9dc5-343b7e0f5fe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-187814 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"da6ba97f-f2ba-41e7-92fd-fec6ae6a650f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T10:06:21Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"a3d2b516-f220-49cf-9189-e0d6f9d01027","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-187814 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.85s)
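
Both JSONOutput failures above (pause and unpause) carry the same underlying error: `sudo runc list -f json` inside the node fails with `open /run/runc: no such file or directory`, so minikube cannot enumerate containers to pause or unpause. A short sketch for confirming the missing runtime state directory from the host, assuming nothing beyond the profile name and the commands quoted in the error:

	out/minikube-linux-arm64 -p json-output-187814 ssh -- "sudo ls -ld /run/runc"
	out/minikube-linux-arm64 -p json-output-187814 ssh -- "sudo runc list -f json"
	# crio's own container listing, for comparison with the failing runc call.
	out/minikube-linux-arm64 -p json-output-187814 ssh -- "sudo crictl ps --quiet"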

                                                
                                    
TestPause/serial/Pause (7.28s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-742370 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-742370 --alsologtostderr -v=5: exit status 80 (2.437239697s)

                                                
                                                
-- stdout --
	* Pausing node pause-742370 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:28:20.661824  678684 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:28:20.663181  678684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:20.663198  678684 out.go:374] Setting ErrFile to fd 2...
	I1115 10:28:20.663204  678684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:28:20.663483  678684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:28:20.663749  678684 out.go:368] Setting JSON to false
	I1115 10:28:20.663778  678684 mustload.go:66] Loading cluster: pause-742370
	I1115 10:28:20.664222  678684 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:28:20.664660  678684 cli_runner.go:164] Run: docker container inspect pause-742370 --format={{.State.Status}}
	I1115 10:28:20.681921  678684 host.go:66] Checking if "pause-742370" exists ...
	I1115 10:28:20.682228  678684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:28:20.743177  678684 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:28:20.732576599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:28:20.743924  678684 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-742370 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:28:20.748829  678684 out.go:179] * Pausing node pause-742370 ... 
	I1115 10:28:20.751776  678684 host.go:66] Checking if "pause-742370" exists ...
	I1115 10:28:20.752141  678684 ssh_runner.go:195] Run: systemctl --version
	I1115 10:28:20.752189  678684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:28:20.769038  678684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:28:20.876049  678684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:28:20.888794  678684 pause.go:52] kubelet running: true
	I1115 10:28:20.888869  678684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:28:21.125077  678684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:28:21.125233  678684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:28:21.191463  678684 cri.go:89] found id: "288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555"
	I1115 10:28:21.191487  678684 cri.go:89] found id: "05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f"
	I1115 10:28:21.191493  678684 cri.go:89] found id: "eca10abde8316621413e98548b270d2b5740f5e6c6f2e387403a71e8813355f1"
	I1115 10:28:21.191496  678684 cri.go:89] found id: "c8e184bde3a719ef981434aa1018993df941ba79e4a11831f1d812e69b3afee4"
	I1115 10:28:21.191500  678684 cri.go:89] found id: "310bc98f84eb9a13685d56f25f40ae6ee2024ca6e91383fa77e34b73a5d1ccdd"
	I1115 10:28:21.191503  678684 cri.go:89] found id: "66f491c30963d573277c592e9b2e25156e8a6d81a7cc25ce3ca261987f5ebf0e"
	I1115 10:28:21.191506  678684 cri.go:89] found id: "f87e275e82af871d828ba41fd96a95ab60f2bc5453c125ca1e11b2196c628dfa"
	I1115 10:28:21.191509  678684 cri.go:89] found id: "ab0f0a81ef6eded6ffef0d3661cdcd2c942b9b2627c3c49c8cdd8a50142ef602"
	I1115 10:28:21.191511  678684 cri.go:89] found id: "772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb"
	I1115 10:28:21.191517  678684 cri.go:89] found id: "894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186"
	I1115 10:28:21.191520  678684 cri.go:89] found id: "6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	I1115 10:28:21.191523  678684 cri.go:89] found id: "72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228"
	I1115 10:28:21.191526  678684 cri.go:89] found id: "760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	I1115 10:28:21.191529  678684 cri.go:89] found id: "cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184"
	I1115 10:28:21.191533  678684 cri.go:89] found id: "dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792"
	I1115 10:28:21.191537  678684 cri.go:89] found id: "5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf"
	I1115 10:28:21.191541  678684 cri.go:89] found id: ""
	I1115 10:28:21.191598  678684 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:28:21.202776  678684 retry.go:31] will retry after 331.811879ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:28:21Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:28:21.535180  678684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:28:21.548122  678684 pause.go:52] kubelet running: false
	I1115 10:28:21.548189  678684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:28:21.700806  678684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:28:21.700900  678684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:28:21.769869  678684 cri.go:89] found id: "288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555"
	I1115 10:28:21.769891  678684 cri.go:89] found id: "05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f"
	I1115 10:28:21.769909  678684 cri.go:89] found id: "eca10abde8316621413e98548b270d2b5740f5e6c6f2e387403a71e8813355f1"
	I1115 10:28:21.769913  678684 cri.go:89] found id: "c8e184bde3a719ef981434aa1018993df941ba79e4a11831f1d812e69b3afee4"
	I1115 10:28:21.769917  678684 cri.go:89] found id: "310bc98f84eb9a13685d56f25f40ae6ee2024ca6e91383fa77e34b73a5d1ccdd"
	I1115 10:28:21.769920  678684 cri.go:89] found id: "66f491c30963d573277c592e9b2e25156e8a6d81a7cc25ce3ca261987f5ebf0e"
	I1115 10:28:21.769923  678684 cri.go:89] found id: "f87e275e82af871d828ba41fd96a95ab60f2bc5453c125ca1e11b2196c628dfa"
	I1115 10:28:21.769926  678684 cri.go:89] found id: "ab0f0a81ef6eded6ffef0d3661cdcd2c942b9b2627c3c49c8cdd8a50142ef602"
	I1115 10:28:21.769929  678684 cri.go:89] found id: "772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb"
	I1115 10:28:21.769939  678684 cri.go:89] found id: "894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186"
	I1115 10:28:21.769946  678684 cri.go:89] found id: "6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	I1115 10:28:21.769949  678684 cri.go:89] found id: "72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228"
	I1115 10:28:21.769956  678684 cri.go:89] found id: "760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	I1115 10:28:21.769959  678684 cri.go:89] found id: "cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184"
	I1115 10:28:21.769963  678684 cri.go:89] found id: "dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792"
	I1115 10:28:21.769968  678684 cri.go:89] found id: "5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf"
	I1115 10:28:21.769982  678684 cri.go:89] found id: ""
	I1115 10:28:21.770029  678684 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:28:21.780564  678684 retry.go:31] will retry after 347.642772ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:28:21Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:28:22.128812  678684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:28:22.145773  678684 pause.go:52] kubelet running: false
	I1115 10:28:22.145841  678684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:28:22.357475  678684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:28:22.357553  678684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:28:22.427386  678684 cri.go:89] found id: "288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555"
	I1115 10:28:22.427406  678684 cri.go:89] found id: "05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f"
	I1115 10:28:22.427412  678684 cri.go:89] found id: "eca10abde8316621413e98548b270d2b5740f5e6c6f2e387403a71e8813355f1"
	I1115 10:28:22.427416  678684 cri.go:89] found id: "c8e184bde3a719ef981434aa1018993df941ba79e4a11831f1d812e69b3afee4"
	I1115 10:28:22.427419  678684 cri.go:89] found id: "310bc98f84eb9a13685d56f25f40ae6ee2024ca6e91383fa77e34b73a5d1ccdd"
	I1115 10:28:22.427422  678684 cri.go:89] found id: "66f491c30963d573277c592e9b2e25156e8a6d81a7cc25ce3ca261987f5ebf0e"
	I1115 10:28:22.427425  678684 cri.go:89] found id: "f87e275e82af871d828ba41fd96a95ab60f2bc5453c125ca1e11b2196c628dfa"
	I1115 10:28:22.427428  678684 cri.go:89] found id: "ab0f0a81ef6eded6ffef0d3661cdcd2c942b9b2627c3c49c8cdd8a50142ef602"
	I1115 10:28:22.427431  678684 cri.go:89] found id: "772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb"
	I1115 10:28:22.427445  678684 cri.go:89] found id: "894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186"
	I1115 10:28:22.427449  678684 cri.go:89] found id: "6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	I1115 10:28:22.427452  678684 cri.go:89] found id: "72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228"
	I1115 10:28:22.427459  678684 cri.go:89] found id: "760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	I1115 10:28:22.427462  678684 cri.go:89] found id: "cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184"
	I1115 10:28:22.427465  678684 cri.go:89] found id: "dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792"
	I1115 10:28:22.427471  678684 cri.go:89] found id: "5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf"
	I1115 10:28:22.427474  678684 cri.go:89] found id: ""
	I1115 10:28:22.427520  678684 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:28:22.440396  678684 retry.go:31] will retry after 314.17098ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:28:22Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:28:22.754798  678684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:28:22.768168  678684 pause.go:52] kubelet running: false
	I1115 10:28:22.768238  678684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:28:22.943235  678684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:28:22.943362  678684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:28:23.013326  678684 cri.go:89] found id: "288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555"
	I1115 10:28:23.013352  678684 cri.go:89] found id: "05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f"
	I1115 10:28:23.013359  678684 cri.go:89] found id: "eca10abde8316621413e98548b270d2b5740f5e6c6f2e387403a71e8813355f1"
	I1115 10:28:23.013362  678684 cri.go:89] found id: "c8e184bde3a719ef981434aa1018993df941ba79e4a11831f1d812e69b3afee4"
	I1115 10:28:23.013366  678684 cri.go:89] found id: "310bc98f84eb9a13685d56f25f40ae6ee2024ca6e91383fa77e34b73a5d1ccdd"
	I1115 10:28:23.013370  678684 cri.go:89] found id: "66f491c30963d573277c592e9b2e25156e8a6d81a7cc25ce3ca261987f5ebf0e"
	I1115 10:28:23.013373  678684 cri.go:89] found id: "f87e275e82af871d828ba41fd96a95ab60f2bc5453c125ca1e11b2196c628dfa"
	I1115 10:28:23.013378  678684 cri.go:89] found id: "ab0f0a81ef6eded6ffef0d3661cdcd2c942b9b2627c3c49c8cdd8a50142ef602"
	I1115 10:28:23.013381  678684 cri.go:89] found id: "772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb"
	I1115 10:28:23.013388  678684 cri.go:89] found id: "894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186"
	I1115 10:28:23.013391  678684 cri.go:89] found id: "6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	I1115 10:28:23.013395  678684 cri.go:89] found id: "72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228"
	I1115 10:28:23.013403  678684 cri.go:89] found id: "760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	I1115 10:28:23.013407  678684 cri.go:89] found id: "cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184"
	I1115 10:28:23.013409  678684 cri.go:89] found id: "dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792"
	I1115 10:28:23.013414  678684 cri.go:89] found id: "5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf"
	I1115 10:28:23.013417  678684 cri.go:89] found id: ""
	I1115 10:28:23.013472  678684 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:28:23.028031  678684 out.go:203] 
	W1115 10:28:23.030898  678684 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:28:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:28:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:28:23.030916  678684 out.go:285] * 
	* 
	W1115 10:28:23.038300  678684 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:28:23.041220  678684 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-742370 --alsologtostderr -v=5" : exit status 80
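
The trace above shows the whole pause flow: kubelet is stopped, crictl still lists the running kube-system containers, but every retry of `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, so the command gives up with GUEST_PAUSE, the same root cause as the JSONOutput pause/unpause failures earlier in this report. A minimal reproduction against this profile, using only commands that already appear in the log:

	out/minikube-linux-arm64 -p pause-742370 ssh -- "sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p pause-742370 ssh -- "sudo runc list -f json"
	out/minikube-linux-arm64 -p pause-742370 ssh -- "sudo ls -ld /run/runc"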
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-742370
helpers_test.go:243: (dbg) docker inspect pause-742370:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470",
	        "Created": "2025-11-15T10:26:18.346213649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 672065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:26:18.416970193Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/hosts",
	        "LogPath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470-json.log",
	        "Name": "/pause-742370",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-742370:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-742370",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470",
	                "LowerDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-742370",
	                "Source": "/var/lib/docker/volumes/pause-742370/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-742370",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-742370",
	                "name.minikube.sigs.k8s.io": "pause-742370",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0d71bfbe60f0052cb05247de54c2b9acf37b1b42e2af5cb4f76d1093fcc5e70",
	            "SandboxKey": "/var/run/docker/netns/f0d71bfbe60f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33754"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33755"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33758"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33756"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33757"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-742370": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:ba:5c:c2:8d:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b743444ceeefc4c7ce9c0bb010c6f891ec5bc462da4ae50a3f11b35777d6b156",
	                    "EndpointID": "ea61f56dc674e4bf4509d2f02eac15b1b3df9a5ba0ca3c4e7c430c20a6675322",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-742370",
	                        "cf89b6cf4733"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-742370 -n pause-742370
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-742370 -n pause-742370: exit status 2 (352.142519ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-742370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-742370 logs -n 25: (1.450574389s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-759398 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:22 UTC │ 15 Nov 25 10:22 UTC │
	│ start   │ -p missing-upgrade-372439 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-372439    │ jenkins │ v1.32.0 │ 15 Nov 25 10:22 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:22 UTC │ 15 Nov 25 10:23 UTC │
	│ delete  │ -p NoKubernetes-759398                                                                                                                   │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p missing-upgrade-372439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-372439    │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:24 UTC │
	│ ssh     │ -p NoKubernetes-759398 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │                     │
	│ stop    │ -p NoKubernetes-759398                                                                                                                   │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-759398 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ ssh     │ -p NoKubernetes-759398 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │                     │
	│ delete  │ -p NoKubernetes-759398                                                                                                                   │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:24 UTC │
	│ delete  │ -p missing-upgrade-372439                                                                                                                │ missing-upgrade-372439    │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ start   │ -p stopped-upgrade-063492 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-063492    │ jenkins │ v1.32.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ stop    │ -p kubernetes-upgrade-480353                                                                                                             │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ start   │ -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │                     │
	│ stop    │ stopped-upgrade-063492 stop                                                                                                              │ stopped-upgrade-063492    │ jenkins │ v1.32.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ start   │ -p stopped-upgrade-063492 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-063492    │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:25 UTC │
	│ delete  │ -p stopped-upgrade-063492                                                                                                                │ stopped-upgrade-063492    │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ start   │ -p running-upgrade-528342 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-528342    │ jenkins │ v1.32.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ start   │ -p running-upgrade-528342 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-528342    │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:26 UTC │
	│ delete  │ -p running-upgrade-528342                                                                                                                │ running-upgrade-528342    │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:26 UTC │
	│ start   │ -p pause-742370 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-742370              │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p pause-742370 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-742370              │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:28 UTC │
	│ pause   │ -p pause-742370 --alsologtostderr -v=5                                                                                                   │ pause-742370              │ jenkins │ v1.37.0 │ 15 Nov 25 10:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:27:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:27:36.855187  676403 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:27:36.855352  676403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:36.855362  676403 out.go:374] Setting ErrFile to fd 2...
	I1115 10:27:36.855368  676403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:36.855702  676403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:27:36.856104  676403 out.go:368] Setting JSON to false
	I1115 10:27:36.857140  676403 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18608,"bootTime":1763183849,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:27:36.857213  676403 start.go:143] virtualization:  
	I1115 10:27:36.860110  676403 out.go:179] * [pause-742370] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:27:36.863986  676403 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:27:36.864094  676403 notify.go:221] Checking for updates...
	I1115 10:27:36.870109  676403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:27:36.873080  676403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:27:36.876053  676403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:27:36.878981  676403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:27:36.881937  676403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:27:36.885300  676403 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:36.885969  676403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:27:36.917761  676403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:27:36.918034  676403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:36.984210  676403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:27:36.974482821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:27:36.984323  676403 docker.go:319] overlay module found
	I1115 10:27:36.987310  676403 out.go:179] * Using the docker driver based on existing profile
	I1115 10:27:36.990180  676403 start.go:309] selected driver: docker
	I1115 10:27:36.990204  676403 start.go:930] validating driver "docker" against &{Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:36.990340  676403 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:27:36.990461  676403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:37.051488  676403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:27:37.042579815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:27:37.051994  676403 cni.go:84] Creating CNI manager for ""
	I1115 10:27:37.052058  676403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:37.052111  676403 start.go:353] cluster config:
	{Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:37.057197  676403 out.go:179] * Starting "pause-742370" primary control-plane node in "pause-742370" cluster
	I1115 10:27:37.060052  676403 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:27:37.062962  676403 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:27:37.065784  676403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:27:37.065836  676403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:27:37.065852  676403 cache.go:65] Caching tarball of preloaded images
	I1115 10:27:37.065890  676403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:27:37.065938  676403 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:27:37.065948  676403 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:27:37.066099  676403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/config.json ...
	I1115 10:27:37.086102  676403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:27:37.086125  676403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:27:37.086145  676403 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:27:37.086168  676403 start.go:360] acquireMachinesLock for pause-742370: {Name:mke364f2e8b67d701cb09d47fae8f68eed7d5351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:37.086224  676403 start.go:364] duration metric: took 35.864µs to acquireMachinesLock for "pause-742370"
	I1115 10:27:37.086249  676403 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:27:37.086259  676403 fix.go:54] fixHost starting: 
	I1115 10:27:37.086526  676403 cli_runner.go:164] Run: docker container inspect pause-742370 --format={{.State.Status}}
	I1115 10:27:37.104475  676403 fix.go:112] recreateIfNeeded on pause-742370: state=Running err=<nil>
	W1115 10:27:37.104511  676403 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:27:38.365820  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:38.366247  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:27:38.366295  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:27:38.366356  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:27:38.394603  661959 cri.go:89] found id: "ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:38.394633  661959 cri.go:89] found id: ""
	I1115 10:27:38.394641  661959 logs.go:282] 1 containers: [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]
	I1115 10:27:38.394697  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:38.398431  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:27:38.398501  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:27:38.434175  661959 cri.go:89] found id: ""
	I1115 10:27:38.434201  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.434210  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:27:38.434217  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:27:38.434274  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:27:38.478808  661959 cri.go:89] found id: ""
	I1115 10:27:38.478839  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.478848  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:27:38.478857  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:27:38.478915  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:27:38.521824  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:38.521847  661959 cri.go:89] found id: ""
	I1115 10:27:38.521865  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:27:38.521928  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:38.526273  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:27:38.526348  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:27:38.567437  661959 cri.go:89] found id: ""
	I1115 10:27:38.567465  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.567473  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:27:38.567479  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:27:38.567536  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:27:38.595104  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:38.595127  661959 cri.go:89] found id: ""
	I1115 10:27:38.595135  661959 logs.go:282] 1 containers: [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:27:38.595188  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:38.599537  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:27:38.599606  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:27:38.637115  661959 cri.go:89] found id: ""
	I1115 10:27:38.637206  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.637234  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:27:38.637270  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:27:38.637369  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:27:38.668663  661959 cri.go:89] found id: ""
	I1115 10:27:38.668730  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.668742  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:27:38.668751  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:27:38.668762  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:38.727300  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:27:38.727336  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:38.752479  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:27:38.752515  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:27:38.808943  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:27:38.808979  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:27:38.837970  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:27:38.838001  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:27:38.958768  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:27:38.958807  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:27:38.977499  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:27:38.977530  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:27:39.045269  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:27:39.045290  661959 logs.go:123] Gathering logs for kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1] ...
	I1115 10:27:39.045305  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:37.107750  676403 out.go:252] * Updating the running docker "pause-742370" container ...
	I1115 10:27:37.107791  676403 machine.go:94] provisionDockerMachine start ...
	I1115 10:27:37.107894  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:37.124845  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:37.125156  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:37.125173  676403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:27:37.281565  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-742370
	
	I1115 10:27:37.281587  676403 ubuntu.go:182] provisioning hostname "pause-742370"
	I1115 10:27:37.281706  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:37.309778  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:37.310094  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:37.310111  676403 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-742370 && echo "pause-742370" | sudo tee /etc/hostname
	I1115 10:27:37.474557  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-742370
	
	I1115 10:27:37.474632  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:37.493128  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:37.493442  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:37.493465  676403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-742370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-742370/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-742370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:27:37.646069  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
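The SSH script above updates /etc/hosts idempotently: it touches the file only when no line already ends with the hostname, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. A quick check of the result, assuming shell access to the node:

	grep pause-742370 /etc/hosts    # expect a 127.0.1.1 entry for the node name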
	I1115 10:27:37.646092  676403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:27:37.646110  676403 ubuntu.go:190] setting up certificates
	I1115 10:27:37.646120  676403 provision.go:84] configureAuth start
	I1115 10:27:37.646177  676403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-742370
	I1115 10:27:37.663872  676403 provision.go:143] copyHostCerts
	I1115 10:27:37.663952  676403 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:27:37.663967  676403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:27:37.664042  676403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:27:37.664140  676403 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:27:37.664146  676403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:27:37.664176  676403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:27:37.664223  676403 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:27:37.664228  676403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:27:37.664249  676403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:27:37.664296  676403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.pause-742370 san=[127.0.0.1 192.168.85.2 localhost minikube pause-742370]
	I1115 10:27:38.050246  676403 provision.go:177] copyRemoteCerts
	I1115 10:27:38.050350  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:27:38.050402  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:38.070099  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:38.178329  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:27:38.196071  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:27:38.226732  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:27:38.245203  676403 provision.go:87] duration metric: took 599.059835ms to configureAuth
	I1115 10:27:38.245231  676403 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:27:38.245457  676403 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:38.245565  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:38.266307  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:38.266620  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:38.266641  676403 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:27:41.577436  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:43.606938  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:27:43.606961  676403 machine.go:97] duration metric: took 6.49915394s to provisionDockerMachine
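Provisioning ends by writing /etc/sysconfig/crio.minikube with an --insecure-registry entry for the service CIDR (10.96.0.0/12) and restarting CRI-O over SSH, as shown a few lines up. To confirm the setting landed, assuming shell access to the node:

	sudo cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio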
	I1115 10:27:43.606972  676403 start.go:293] postStartSetup for "pause-742370" (driver="docker")
	I1115 10:27:43.606983  676403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:27:43.607048  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:27:43.607101  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.628070  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:43.734137  676403 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:27:43.737547  676403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:27:43.737578  676403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:27:43.737590  676403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:27:43.737674  676403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:27:43.737772  676403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:27:43.737876  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:27:43.745305  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:27:43.763353  676403 start.go:296] duration metric: took 156.364146ms for postStartSetup
	I1115 10:27:43.763432  676403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:27:43.763477  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.780920  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:43.887164  676403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:27:43.892905  676403 fix.go:56] duration metric: took 6.80663736s for fixHost
	I1115 10:27:43.892930  676403 start.go:83] releasing machines lock for "pause-742370", held for 6.806692974s
	I1115 10:27:43.893013  676403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-742370
	I1115 10:27:43.909703  676403 ssh_runner.go:195] Run: cat /version.json
	I1115 10:27:43.909761  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.909853  676403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:27:43.909936  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.932441  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:43.950444  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:44.041299  676403 ssh_runner.go:195] Run: systemctl --version
	I1115 10:27:44.138243  676403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:27:44.180266  676403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:27:44.184974  676403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:27:44.185083  676403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:27:44.193047  676403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:27:44.193072  676403 start.go:496] detecting cgroup driver to use...
	I1115 10:27:44.193106  676403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:27:44.193159  676403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:27:44.209183  676403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:27:44.222357  676403 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:27:44.222458  676403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:27:44.238486  676403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:27:44.251673  676403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:27:44.408485  676403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:27:44.648015  676403 docker.go:234] disabling docker service ...
	I1115 10:27:44.648118  676403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:27:44.682172  676403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:27:44.710509  676403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:27:44.967610  676403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:27:45.271336  676403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:27:45.293432  676403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:27:45.320935  676403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:27:45.321039  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.337196  676403 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:27:45.337284  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.347384  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.362881  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.379899  676403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:27:45.393320  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.420737  676403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.452318  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.471238  676403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:27:45.486867  676403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:27:45.498953  676403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:45.727501  676403 ssh_runner.go:195] Run: sudo systemctl restart crio
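The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Assuming the stock file already carried pause_image and cgroup_manager keys (which the in-place substitutions rely on), the resulting fragment would look roughly like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]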
	I1115 10:27:46.578210  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1115 10:27:46.578269  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:27:46.578345  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:27:46.604540  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:27:46.604564  661959 cri.go:89] found id: "ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:46.604570  661959 cri.go:89] found id: ""
	I1115 10:27:46.604577  661959 logs.go:282] 2 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]
	I1115 10:27:46.604635  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.608309  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.611800  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:27:46.611873  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:27:46.639606  661959 cri.go:89] found id: ""
	I1115 10:27:46.639632  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.639641  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:27:46.639650  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:27:46.639763  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:27:46.668414  661959 cri.go:89] found id: ""
	I1115 10:27:46.668440  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.668449  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:27:46.668455  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:27:46.668514  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:27:46.695459  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:46.695483  661959 cri.go:89] found id: ""
	I1115 10:27:46.695492  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:27:46.695546  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.699223  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:27:46.699293  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:27:46.724723  661959 cri.go:89] found id: ""
	I1115 10:27:46.724746  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.724754  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:27:46.724761  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:27:46.724819  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:27:46.751093  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:46.751114  661959 cri.go:89] found id: ""
	I1115 10:27:46.751123  661959 logs.go:282] 1 containers: [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:27:46.751177  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.754854  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:27:46.754923  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:27:46.780259  661959 cri.go:89] found id: ""
	I1115 10:27:46.780285  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.780294  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:27:46.780300  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:27:46.780358  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:27:46.812599  661959 cri.go:89] found id: ""
	I1115 10:27:46.812625  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.812634  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:27:46.812648  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:27:46.812660  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:27:46.931609  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:27:46.931648  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1115 10:27:55.180781  676403 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.453243052s)
	I1115 10:27:55.180804  676403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:27:55.180870  676403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:27:55.190541  676403 start.go:564] Will wait 60s for crictl version
	I1115 10:27:55.190622  676403 ssh_runner.go:195] Run: which crictl
	I1115 10:27:55.194809  676403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:27:55.219911  676403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:27:55.220002  676403 ssh_runner.go:195] Run: crio --version
	I1115 10:27:55.251012  676403 ssh_runner.go:195] Run: crio --version
	I1115 10:27:55.281676  676403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:27:55.284764  676403 cli_runner.go:164] Run: docker network inspect pause-742370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:55.300062  676403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:55.304444  676403 kubeadm.go:884] updating cluster {Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:55.304586  676403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:27:55.304643  676403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:55.340284  676403 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:55.340308  676403 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:27:55.340371  676403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:55.365962  676403 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:55.365985  676403 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:27:55.365994  676403 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 10:27:55.366092  676403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-742370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
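The kubelet drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 362-byte scp a few lines below) and picked up by the subsequent systemctl daemon-reload. One way to inspect the effective unit and flags on the node:

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart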
	I1115 10:27:55.366175  676403 ssh_runner.go:195] Run: crio config
	I1115 10:27:55.428913  676403 cni.go:84] Creating CNI manager for ""
	I1115 10:27:55.428936  676403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:55.428959  676403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:55.428982  676403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-742370 NodeName:pause-742370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:55.429106  676403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-742370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:27:55.429181  676403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:27:55.437553  676403 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:55.437645  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:55.445218  676403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1115 10:27:55.457578  676403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:55.471610  676403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
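The 2209-byte kubeadm.yaml.new copied above is the rendered config shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A config in this form is normally consumed through kubeadm's --config flag; an illustrative dry run, assuming kubeadm sits alongside kubectl under /var/lib/minikube/binaries/v1.34.1 and the rendered file has been moved into place as kubeadm.yaml:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run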
	I1115 10:27:55.484026  676403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:55.488973  676403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:55.635797  676403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:55.649953  676403 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370 for IP: 192.168.85.2
	I1115 10:27:55.650031  676403 certs.go:195] generating shared ca certs ...
	I1115 10:27:55.650057  676403 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:55.650206  676403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:27:55.650260  676403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:27:55.650286  676403 certs.go:257] generating profile certs ...
	I1115 10:27:55.650383  676403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key
	I1115 10:27:55.650450  676403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/apiserver.key.57edb4e6
	I1115 10:27:55.650529  676403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/proxy-client.key
	I1115 10:27:55.650640  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:27:55.650673  676403 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:55.650685  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:27:55.650708  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:55.650732  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:55.650758  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:27:55.650805  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:27:55.651383  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:55.669769  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:55.686571  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:55.704217  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:55.720625  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:27:55.737994  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:27:55.755106  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:55.772322  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:27:55.789192  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:27:55.806155  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:55.822821  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:27:55.848033  676403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:55.860488  676403 ssh_runner.go:195] Run: openssl version
	I1115 10:27:55.866743  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:27:55.875261  676403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:27:55.878860  676403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:27:55.878921  676403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:27:55.919448  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:55.927503  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:55.935935  676403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:55.939504  676403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:55.939579  676403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:55.981175  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:27:55.988970  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:27:55.997298  676403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:27:56.001888  676403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:27:56.002014  676403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:27:56.045408  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
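The sequence above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0). A small sketch of the same pattern, shelling out to the same openssl invocation shown in the log; the paths are taken from the log and the helper name is illustrative (root privileges and an openssl binary are assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash -noout -in <pem>" plus
// "ln -fs" steps above: it asks openssl for the subject hash and creates
// <certsDir>/<hash>.0 pointing at the certificate.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // the -f behaviour of ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path from the log above.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}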
	I1115 10:27:56.053703  676403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:56.057549  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:27:56.098596  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:27:56.140675  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:27:56.181431  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:27:56.224057  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:27:56.264971  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
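Each "openssl x509 -noout -checkend 86400" call above asks whether the certificate expires within the next 24 hours. The same check can be done with the Go standard library alone; this is a stand-alone sketch, not minikube's implementation, and the certificate path is just one of those probed in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window -- the question "openssl x509 -checkend 86400" answers.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}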
	I1115 10:27:56.305796  676403 kubeadm.go:401] StartCluster: {Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:56.305911  676403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:56.306006  676403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:56.334491  676403 cri.go:89] found id: "772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb"
	I1115 10:27:56.334514  676403 cri.go:89] found id: "894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186"
	I1115 10:27:56.334521  676403 cri.go:89] found id: "6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	I1115 10:27:56.334525  676403 cri.go:89] found id: "72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228"
	I1115 10:27:56.334528  676403 cri.go:89] found id: "760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	I1115 10:27:56.334532  676403 cri.go:89] found id: "cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184"
	I1115 10:27:56.334535  676403 cri.go:89] found id: "dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792"
	I1115 10:27:56.334562  676403 cri.go:89] found id: "5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf"
	I1115 10:27:56.334571  676403 cri.go:89] found id: "6f1a31e05552c254ec0bd0ee9f0c3d765425a121cef0b862f4c97573e8983092"
	I1115 10:27:56.334580  676403 cri.go:89] found id: "ea4fb980a515e467e33d46395276a789a4ece55251fb99babf679dbbab5e61ca"
	I1115 10:27:56.334583  676403 cri.go:89] found id: "d3edb77378f704636b6b906c0602b5fbbc83ab4dea8ada6a5cd4185864948c6c"
	I1115 10:27:56.334587  676403 cri.go:89] found id: "c64150522e89e835bdfa195e37765ff2ef4ae63ad8d7025bc5e5b9f075cb55af"
	I1115 10:27:56.334590  676403 cri.go:89] found id: "534661e04c63930425a4633a1e9e9ed45d5dfbe868b098444d1060b9a020af8f"
	I1115 10:27:56.334593  676403 cri.go:89] found id: "4602e2b05d93357c275e91842c3b8c26bcc12dff12d5c91834779995adcb7294"
	I1115 10:27:56.334599  676403 cri.go:89] found id: "f813afbe6ccd1ed2d7cea245b742dcfab91a1cdfaedbb9e40e4563ae9760c9f6"
	I1115 10:27:56.334605  676403 cri.go:89] found id: "ebfe8165e01e1232a7622dd6f871d26bdca77a08246dd8c8c9e76236d35743e0"
	I1115 10:27:56.334611  676403 cri.go:89] found id: ""
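The container IDs listed above all come from one crictl invocation filtered by the pod namespace label. A minimal sketch of the same call via os/exec, with the command string copied verbatim from the log (sudo access to the CRI socket is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the command shown in the log:
//   sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
// and returns one container ID per line of output.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}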
	I1115 10:27:56.334681  676403 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:27:56.346521  676403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:56Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:56.346631  676403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:56.355302  676403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:27:56.355323  676403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:27:56.355398  676403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:27:56.363009  676403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:27:56.363653  676403 kubeconfig.go:125] found "pause-742370" server: "https://192.168.85.2:8443"
	I1115 10:27:56.364473  676403 kapi.go:59] client config for pause-742370: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key", CAFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
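The client config dumped above is a client-go rest.Config pointing at the profile's client cert, key, and CA. A rough sketch of building an equivalent clientset, assuming k8s.io/client-go is on the module path; the Host and file paths are copied from the dump and would need adjusting for another profile:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths taken from the rest.Config dump above.
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("built clientset for %s: %T\n", cfg.Host, clientset)
}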
	I1115 10:27:56.364979  676403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:27:56.365000  676403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:27:56.365007  676403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:27:56.365012  676403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:27:56.365020  676403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:27:56.365292  676403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:27:56.373373  676403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:27:56.373406  676403 kubeadm.go:602] duration metric: took 18.077128ms to restartPrimaryControlPlane
	I1115 10:27:56.373416  676403 kubeadm.go:403] duration metric: took 67.640692ms to StartCluster
	I1115 10:27:56.373435  676403 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:56.373499  676403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:27:56.374459  676403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:56.374725  676403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:56.375162  676403 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:56.375247  676403 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:27:56.381185  676403 out.go:179] * Verifying Kubernetes components...
	I1115 10:27:56.381187  676403 out.go:179] * Enabled addons: 
	I1115 10:27:56.384020  676403 addons.go:515] duration metric: took 8.767671ms for enable addons: enabled=[]
	I1115 10:27:56.384070  676403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:56.531940  676403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:56.546817  676403 node_ready.go:35] waiting up to 6m0s for node "pause-742370" to be "Ready" ...
	I1115 10:27:57.005893  661959 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.074217346s)
	W1115 10:27:57.005947  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1115 10:27:57.005956  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:27:57.005967  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:57.032530  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:27:57.032556  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:27:57.093734  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:27:57.093772  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:27:57.129510  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:27:57.129537  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:27:57.148838  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:27:57.148869  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:27:57.183194  661959 logs.go:123] Gathering logs for kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1] ...
	I1115 10:27:57.183228  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:57.219528  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:27:57.219560  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:59.778537  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:01.890243  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:41472->192.168.76.2:8443: read: connection reset by peer
	I1115 10:28:01.890328  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:01.890422  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:01.958927  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:01.958961  661959 cri.go:89] found id: "ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:28:01.958968  661959 cri.go:89] found id: ""
	I1115 10:28:01.958976  661959 logs.go:282] 2 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]
	I1115 10:28:01.959078  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:01.963082  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:01.969216  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:01.969334  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:02.029344  661959 cri.go:89] found id: ""
	I1115 10:28:02.029381  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.029391  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:02.029397  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:02.029491  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:02.094011  661959 cri.go:89] found id: ""
	I1115 10:28:02.094047  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.094071  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:02.094082  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:02.094244  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:02.158235  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:02.158266  661959 cri.go:89] found id: ""
	I1115 10:28:02.158281  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:02.158378  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:02.163790  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:02.163911  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:02.225684  661959 cri.go:89] found id: ""
	I1115 10:28:02.225718  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.225727  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:02.225760  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:02.225843  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:02.273269  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:02.273292  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:02.273298  661959 cri.go:89] found id: ""
	I1115 10:28:02.273306  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:02.273407  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:02.277479  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:02.285514  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:02.285639  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:02.329220  661959 cri.go:89] found id: ""
	I1115 10:28:02.329283  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.329298  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:02.329305  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:02.329383  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:02.383039  661959 cri.go:89] found id: ""
	I1115 10:28:02.383069  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.383078  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:02.383119  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:02.383139  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:02.436074  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:02.436101  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:02.580705  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:02.580763  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:02.606032  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:02.606103  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:02.737585  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:02.737682  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:02.737711  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:02.792824  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:02.792898  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:02.859114  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:02.859136  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:02.947920  661959 logs.go:123] Gathering logs for kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1] ...
	I1115 10:28:02.948022  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	W1115 10:28:03.022106  661959 logs.go:130] failed kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1": Process exited with status 1
	stdout:
	
	stderr:
	E1115 10:28:03.019254    4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist" containerID="ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	time="2025-11-15T10:28:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1115 10:28:03.019254    4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist" containerID="ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	time="2025-11-15T10:28:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist"
	
	** /stderr **
	I1115 10:28:03.022124  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:03.022142  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:03.107920  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:03.108001  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:05.648616  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:05.649020  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:05.649059  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:05.649118  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:05.676068  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:05.676090  661959 cri.go:89] found id: ""
	I1115 10:28:05.676097  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:05.676155  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.679850  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:05.679930  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:05.709032  661959 cri.go:89] found id: ""
	I1115 10:28:05.709061  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.709076  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:05.709083  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:05.709141  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:05.735315  661959 cri.go:89] found id: ""
	I1115 10:28:05.735341  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.735351  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:05.735357  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:05.735416  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:05.763120  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:05.763148  661959 cri.go:89] found id: ""
	I1115 10:28:05.763158  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:05.763228  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.767147  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:05.767223  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:05.794594  661959 cri.go:89] found id: ""
	I1115 10:28:05.794620  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.794629  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:05.794637  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:05.794726  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:05.827545  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:05.827568  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:05.827573  661959 cri.go:89] found id: ""
	I1115 10:28:05.827580  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:05.827640  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.831659  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.835230  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:05.835299  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:05.866245  661959 cri.go:89] found id: ""
	I1115 10:28:05.866268  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.866276  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:05.866282  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:05.866347  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:05.892865  661959 cri.go:89] found id: ""
	I1115 10:28:05.892931  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.892954  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:05.892991  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:05.893022  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:05.911076  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:05.911107  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:05.948141  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:05.948174  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:06.028323  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:06.028362  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:06.077053  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:06.077082  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:06.213526  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:06.213619  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:06.290582  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:06.290605  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:06.290621  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:06.353623  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:06.353664  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:06.379765  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:06.379792  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:02.023267  676403 node_ready.go:49] node "pause-742370" is "Ready"
	I1115 10:28:02.023300  676403 node_ready.go:38] duration metric: took 5.476438461s for node "pause-742370" to be "Ready" ...
	I1115 10:28:02.023321  676403 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:28:02.023385  676403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:28:02.051298  676403 api_server.go:72] duration metric: took 5.676535508s to wait for apiserver process to appear ...
	I1115 10:28:02.051323  676403 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:28:02.051343  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:02.091049  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:02.091163  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:02.552293  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:02.574077  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:02.574122  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:03.051700  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:03.099371  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:03.099400  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:03.551999  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:03.569236  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:03.569261  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:04.051982  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:04.060178  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
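The 500 responses above are expected while the apiserver's post-start hooks (rbac/bootstrap-roles, service-ip-repair, etc.) finish; the loop simply keeps re-requesting /healthz until it gets a plain "ok". A hedged sketch of such a poll loop against the cluster CA (this is not minikube's own implementation; the URL, CA path, and 500 ms cadence are taken from the log):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, mirroring the checks logged above.
func waitForHealthz(url, caFile string, timeout time.Duration) error {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	err := waitForHealthz("https://192.168.85.2:8443/healthz",
		"/home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt", 2*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}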
	I1115 10:28:04.061241  676403 api_server.go:141] control plane version: v1.34.1
	I1115 10:28:04.061267  676403 api_server.go:131] duration metric: took 2.009936241s to wait for apiserver health ...
	I1115 10:28:04.061277  676403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:28:04.065867  676403 system_pods.go:59] 8 kube-system pods found
	I1115 10:28:04.065913  676403 system_pods.go:61] "coredns-66bc5c9577-55cnz" [23e8d4ed-4a7c-4411-a5ab-ecc48346820e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.065948  676403 system_pods.go:61] "coredns-66bc5c9577-pv4lm" [0743264a-d168-4c5f-ae23-90f5be7daea5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.065964  676403 system_pods.go:61] "etcd-pause-742370" [9739e739-7c92-445a-900f-865eb5f17743] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:28:04.065970  676403 system_pods.go:61] "kindnet-9xgvp" [8db4a3c3-62e1-41af-9e3a-84e123082d25] Running
	I1115 10:28:04.065980  676403 system_pods.go:61] "kube-apiserver-pause-742370" [6a524cfc-f284-49f7-aaab-599544ba7b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:28:04.065999  676403 system_pods.go:61] "kube-controller-manager-pause-742370" [2580948b-a3dc-4cd5-aaf9-0b5ac5d70aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:28:04.066004  676403 system_pods.go:61] "kube-proxy-mcjx7" [828ebe3d-841e-4dba-b3d3-924cc9a20bf4] Running
	I1115 10:28:04.066026  676403 system_pods.go:61] "kube-scheduler-pause-742370" [c32c0652-1422-4bab-a7e9-64386dd7550a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:28:04.066037  676403 system_pods.go:74] duration metric: took 4.735624ms to wait for pod list to return data ...
	I1115 10:28:04.066065  676403 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:28:04.068980  676403 default_sa.go:45] found service account: "default"
	I1115 10:28:04.069011  676403 default_sa.go:55] duration metric: took 2.938784ms for default service account to be created ...
	I1115 10:28:04.069022  676403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:28:04.072306  676403 system_pods.go:86] 8 kube-system pods found
	I1115 10:28:04.072342  676403 system_pods.go:89] "coredns-66bc5c9577-55cnz" [23e8d4ed-4a7c-4411-a5ab-ecc48346820e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.072352  676403 system_pods.go:89] "coredns-66bc5c9577-pv4lm" [0743264a-d168-4c5f-ae23-90f5be7daea5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.072360  676403 system_pods.go:89] "etcd-pause-742370" [9739e739-7c92-445a-900f-865eb5f17743] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:28:04.072365  676403 system_pods.go:89] "kindnet-9xgvp" [8db4a3c3-62e1-41af-9e3a-84e123082d25] Running
	I1115 10:28:04.072371  676403 system_pods.go:89] "kube-apiserver-pause-742370" [6a524cfc-f284-49f7-aaab-599544ba7b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:28:04.072377  676403 system_pods.go:89] "kube-controller-manager-pause-742370" [2580948b-a3dc-4cd5-aaf9-0b5ac5d70aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:28:04.072388  676403 system_pods.go:89] "kube-proxy-mcjx7" [828ebe3d-841e-4dba-b3d3-924cc9a20bf4] Running
	I1115 10:28:04.072396  676403 system_pods.go:89] "kube-scheduler-pause-742370" [c32c0652-1422-4bab-a7e9-64386dd7550a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:28:04.072414  676403 system_pods.go:126] duration metric: took 3.386246ms to wait for k8s-apps to be running ...
	I1115 10:28:04.072422  676403 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:28:04.072478  676403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:28:04.085729  676403 system_svc.go:56] duration metric: took 13.296343ms WaitForService to wait for kubelet
	I1115 10:28:04.085808  676403 kubeadm.go:587] duration metric: took 7.711050549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:28:04.085834  676403 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:28:04.088804  676403 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:28:04.088876  676403 node_conditions.go:123] node cpu capacity is 2
	I1115 10:28:04.088895  676403 node_conditions.go:105] duration metric: took 3.054087ms to run NodePressure ...
	I1115 10:28:04.088909  676403 start.go:242] waiting for startup goroutines ...
	I1115 10:28:04.088916  676403 start.go:247] waiting for cluster config update ...
	I1115 10:28:04.088925  676403 start.go:256] writing updated cluster config ...
	I1115 10:28:04.089233  676403 ssh_runner.go:195] Run: rm -f paused
	I1115 10:28:04.093002  676403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:28:04.093792  676403 kapi.go:59] client config for pause-742370: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key", CAFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:28:04.097118  676403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-55cnz" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:28:06.130351  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	I1115 10:28:08.926437  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:08.926956  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:08.927011  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:08.927081  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:08.966639  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:08.966661  661959 cri.go:89] found id: ""
	I1115 10:28:08.966670  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:08.966723  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:08.970508  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:08.970575  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:08.998980  661959 cri.go:89] found id: ""
	I1115 10:28:08.999005  661959 logs.go:282] 0 containers: []
	W1115 10:28:08.999023  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:08.999029  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:08.999088  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:09.030521  661959 cri.go:89] found id: ""
	I1115 10:28:09.030546  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.030554  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:09.030561  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:09.030620  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:09.055614  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:09.055637  661959 cri.go:89] found id: ""
	I1115 10:28:09.055645  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:09.055702  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:09.059252  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:09.059324  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:09.088097  661959 cri.go:89] found id: ""
	I1115 10:28:09.088124  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.088134  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:09.088144  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:09.088203  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:09.118860  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:09.118882  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:09.118888  661959 cri.go:89] found id: ""
	I1115 10:28:09.118895  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:09.118971  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:09.122640  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:09.126084  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:09.126170  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:09.151459  661959 cri.go:89] found id: ""
	I1115 10:28:09.151521  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.151539  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:09.151547  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:09.151607  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:09.175914  661959 cri.go:89] found id: ""
	I1115 10:28:09.175981  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.175996  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:09.176012  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:09.176026  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:09.193522  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:09.193551  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:09.261265  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:09.261298  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:09.261318  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:09.322283  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:09.322321  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:09.350344  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:09.350373  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:09.377908  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:09.377938  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:09.446289  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:09.446325  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:09.481249  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:09.481278  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:09.514603  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:09.514638  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1115 10:28:08.603202  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	W1115 10:28:10.604201  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	I1115 10:28:12.136621  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:12.137069  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:12.137116  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:12.137182  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:12.164325  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:12.164396  661959 cri.go:89] found id: ""
	I1115 10:28:12.164420  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:12.164509  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.168263  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:12.168397  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:12.195238  661959 cri.go:89] found id: ""
	I1115 10:28:12.195264  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.195273  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:12.195280  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:12.195359  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:12.222866  661959 cri.go:89] found id: ""
	I1115 10:28:12.222898  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.222907  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:12.222914  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:12.223015  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:12.256665  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:12.256689  661959 cri.go:89] found id: ""
	I1115 10:28:12.256698  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:12.256775  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.260498  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:12.260575  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:12.289038  661959 cri.go:89] found id: ""
	I1115 10:28:12.289112  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.289138  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:12.289164  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:12.289253  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:12.316558  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:12.316635  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:12.316654  661959 cri.go:89] found id: ""
	I1115 10:28:12.316681  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:12.316770  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.320522  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.323989  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:12.324065  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:12.349870  661959 cri.go:89] found id: ""
	I1115 10:28:12.349945  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.349982  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:12.350018  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:12.350111  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:12.376779  661959 cri.go:89] found id: ""
	I1115 10:28:12.376801  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.376809  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:12.376823  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:12.376836  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:12.394919  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:12.395004  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:12.465076  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:12.465140  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:12.465168  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:12.496317  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:12.496346  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:12.522219  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:12.522247  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:12.587519  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:12.587556  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:12.630888  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:12.630920  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:12.752824  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:12.752863  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:12.791436  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:12.791470  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:15.355127  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:15.355589  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:15.355655  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:15.355727  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:15.382489  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:15.382509  661959 cri.go:89] found id: ""
	I1115 10:28:15.382517  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:15.382570  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:15.386326  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:15.386446  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:15.415120  661959 cri.go:89] found id: ""
	I1115 10:28:15.415196  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.415221  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:15.415241  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:15.415322  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:15.442506  661959 cri.go:89] found id: ""
	I1115 10:28:15.442532  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.442542  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:15.442548  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:15.442666  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:15.471329  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:15.471360  661959 cri.go:89] found id: ""
	I1115 10:28:15.471369  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:15.471433  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:15.475454  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:15.475576  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:15.501796  661959 cri.go:89] found id: ""
	I1115 10:28:15.501859  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.501882  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:15.501908  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:15.501990  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:15.530764  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:15.530786  661959 cri.go:89] found id: ""
	I1115 10:28:15.530795  661959 logs.go:282] 1 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e]
	I1115 10:28:15.530875  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:15.534662  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:15.534746  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:15.575537  661959 cri.go:89] found id: ""
	I1115 10:28:15.575614  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.575637  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:15.575662  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:15.575773  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:15.613413  661959 cri.go:89] found id: ""
	I1115 10:28:15.613489  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.613513  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:15.613555  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:15.613587  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:15.661195  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:15.661225  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:15.795501  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:15.795542  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:15.814627  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:15.814657  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:15.890459  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:15.890477  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:15.890490  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:15.926113  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:15.926144  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:15.989083  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:15.989118  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:16.018212  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:16.018242  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1115 10:28:12.604274  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	W1115 10:28:15.104196  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	I1115 10:28:17.102098  676403 pod_ready.go:94] pod "coredns-66bc5c9577-55cnz" is "Ready"
	I1115 10:28:17.102123  676403 pod_ready.go:86] duration metric: took 13.004976684s for pod "coredns-66bc5c9577-55cnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:17.102133  676403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pv4lm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.108336  676403 pod_ready.go:94] pod "coredns-66bc5c9577-pv4lm" is "Ready"
	I1115 10:28:19.108366  676403 pod_ready.go:86] duration metric: took 2.006225097s for pod "coredns-66bc5c9577-pv4lm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.111554  676403 pod_ready.go:83] waiting for pod "etcd-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.134365  676403 pod_ready.go:94] pod "etcd-pause-742370" is "Ready"
	I1115 10:28:19.134394  676403 pod_ready.go:86] duration metric: took 22.812317ms for pod "etcd-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.136940  676403 pod_ready.go:83] waiting for pod "kube-apiserver-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.145712  676403 pod_ready.go:94] pod "kube-apiserver-pause-742370" is "Ready"
	I1115 10:28:19.145734  676403 pod_ready.go:86] duration metric: took 8.773759ms for pod "kube-apiserver-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.147875  676403 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.306603  676403 pod_ready.go:94] pod "kube-controller-manager-pause-742370" is "Ready"
	I1115 10:28:19.306628  676403 pod_ready.go:86] duration metric: took 158.737296ms for pod "kube-controller-manager-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.506616  676403 pod_ready.go:83] waiting for pod "kube-proxy-mcjx7" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.906314  676403 pod_ready.go:94] pod "kube-proxy-mcjx7" is "Ready"
	I1115 10:28:19.906344  676403 pod_ready.go:86] duration metric: took 399.696357ms for pod "kube-proxy-mcjx7" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:20.106755  676403 pod_ready.go:83] waiting for pod "kube-scheduler-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:20.506909  676403 pod_ready.go:94] pod "kube-scheduler-pause-742370" is "Ready"
	I1115 10:28:20.506936  676403 pod_ready.go:86] duration metric: took 400.151951ms for pod "kube-scheduler-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:20.506950  676403 pod_ready.go:40] duration metric: took 16.413914858s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:28:20.562461  676403 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:28:20.565742  676403 out.go:179] * Done! kubectl is now configured to use "pause-742370" cluster and "default" namespace by default
	I1115 10:28:18.581688  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:18.582066  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:18.582115  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:18.582180  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:18.612297  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:18.612320  661959 cri.go:89] found id: ""
	I1115 10:28:18.612329  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:18.612391  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:18.616906  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:18.616985  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:18.656903  661959 cri.go:89] found id: ""
	I1115 10:28:18.656930  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.656939  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:18.656945  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:18.657010  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:18.700769  661959 cri.go:89] found id: ""
	I1115 10:28:18.700795  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.700804  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:18.700811  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:18.700868  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:18.735972  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:18.736007  661959 cri.go:89] found id: ""
	I1115 10:28:18.736016  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:18.736089  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:18.739903  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:18.740000  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:18.770126  661959 cri.go:89] found id: ""
	I1115 10:28:18.770192  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.770217  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:18.770232  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:18.770304  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:18.802946  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:18.802970  661959 cri.go:89] found id: ""
	I1115 10:28:18.802978  661959 logs.go:282] 1 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e]
	I1115 10:28:18.803032  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:18.807687  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:18.807748  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:18.850628  661959 cri.go:89] found id: ""
	I1115 10:28:18.850650  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.850659  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:18.850665  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:18.850726  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:18.882857  661959 cri.go:89] found id: ""
	I1115 10:28:18.882883  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.882891  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:18.882902  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:18.882914  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:18.901127  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:18.901157  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:18.972978  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:18.973000  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:18.973015  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:19.005828  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:19.005861  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:19.067952  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:19.067989  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:19.098878  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:19.098905  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:19.168404  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:19.168444  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:19.206317  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:19.206344  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.938648765Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.938669261Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.941729592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.941760983Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.941782743Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.944690315Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.944729157Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.131129247Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=c40765e2-fe55-4584-81f7-be735eef6254 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.132548047Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=4db44e6c-2707-4074-9722-3949db96756a name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.133505453Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-55cnz/coredns" id=86c5a479-ee73-41e4-8f30-bbcac2c5fbf7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.13366407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.141998899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.142550744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.15692819Z" level=info msg="Created container 05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f: kube-system/coredns-66bc5c9577-55cnz/coredns" id=86c5a479-ee73-41e4-8f30-bbcac2c5fbf7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.159398276Z" level=info msg="Starting container: 05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f" id=24462274-297e-4b41-bea3-6d0ef424c9d3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.161109261Z" level=info msg="Started container" PID=2800 containerID=05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f description=kube-system/coredns-66bc5c9577-55cnz/coredns id=24462274-297e-4b41-bea3-6d0ef424c9d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ace949a8f9f625561d0aabe18c374c1c203fdec1de6ffcaec59e9116ce9e5239
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.131194105Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=1adbfd46-783e-475f-9f9f-0811394aa3ee name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.13232342Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6439f208-63ce-4ec1-9c21-14fed45bf8cd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.13319643Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-pv4lm/coredns" id=77cd8584-afe6-4af7-aedf-ce4a0cd55171 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.133307516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.13901817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.139522485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.155382434Z" level=info msg="Created container 288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555: kube-system/coredns-66bc5c9577-pv4lm/coredns" id=77cd8584-afe6-4af7-aedf-ce4a0cd55171 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.156091953Z" level=info msg="Starting container: 288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555" id=33860eca-d923-4ac7-949e-3f4327a6e76d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.158673322Z" level=info msg="Started container" PID=2815 containerID=288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555 description=kube-system/coredns-66bc5c9577-pv4lm/coredns id=33860eca-d923-4ac7-949e-3f4327a6e76d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dfce00123902b453aba45be17c3f97aa9c20a67a89daa6e27a68232f0dbaa26
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	288a80c19f3a3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 seconds ago       Running             coredns                   2                   3dfce00123902       coredns-66bc5c9577-pv4lm               kube-system
	05efb4a261f9f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 seconds ago       Running             coredns                   2                   ace949a8f9f62       coredns-66bc5c9577-55cnz               kube-system
	eca10abde8316       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago      Running             kindnet-cni               2                   770b17e152e33       kindnet-9xgvp                          kube-system
	c8e184bde3a71       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   26 seconds ago      Running             kube-scheduler            2                   acb7140270d9f       kube-scheduler-pause-742370            kube-system
	310bc98f84eb9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   26 seconds ago      Running             etcd                      2                   3a97e5158b975       etcd-pause-742370                      kube-system
	66f491c30963d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago      Running             kube-apiserver            2                   4b4850d31d09b       kube-apiserver-pause-742370            kube-system
	f87e275e82af8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago      Running             kube-controller-manager   2                   8e40c96fddb3b       kube-controller-manager-pause-742370   kube-system
	ab0f0a81ef6ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago      Running             kube-proxy                2                   88c316f5bc9cc       kube-proxy-mcjx7                       kube-system
	772520002a0d2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   39 seconds ago      Exited              kube-proxy                1                   88c316f5bc9cc       kube-proxy-mcjx7                       kube-system
	894151757a420       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   39 seconds ago      Exited              kube-scheduler            1                   acb7140270d9f       kube-scheduler-pause-742370            kube-system
	6e9864194912a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago      Exited              coredns                   1                   ace949a8f9f62       coredns-66bc5c9577-55cnz               kube-system
	72a49dbd30345       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   39 seconds ago      Exited              kindnet-cni               1                   770b17e152e33       kindnet-9xgvp                          kube-system
	760e182248368       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago      Exited              coredns                   1                   3dfce00123902       coredns-66bc5c9577-pv4lm               kube-system
	cfb4b2c9e1313       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   39 seconds ago      Exited              kube-controller-manager   1                   8e40c96fddb3b       kube-controller-manager-pause-742370   kube-system
	dcfbe82e3ea3a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   39 seconds ago      Exited              kube-apiserver            1                   4b4850d31d09b       kube-apiserver-pause-742370            kube-system
	5677ea4b60fed       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   39 seconds ago      Exited              etcd                      1                   3a97e5158b975       etcd-pause-742370                      kube-system
	
	
	==> coredns [05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59118 - 63155 "HINFO IN 936987760029837079.1719399592185128055. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012876851s
	
	
	==> coredns [288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39201 - 43462 "HINFO IN 3376852173061605213.5254767342976220100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003502748s
	
	
	==> coredns [6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:36694 - 34092 "HINFO IN 2088388766527242553.822144519811294297. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.036133941s
	
	
	==> coredns [760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47103 - 49832 "HINFO IN 7731657668072876780.3955365230317501202. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012771976s
	
	
	==> describe nodes <==
	Name:               pause-742370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-742370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=pause-742370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_26_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:26:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-742370
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:28:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:26:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:26:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:26:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:27:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-742370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e10bddd9-6d57-4527-9e98-28976bb9c4d7
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-55cnz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 coredns-66bc5c9577-pv4lm                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 etcd-pause-742370                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         97s
	  kube-system                 kindnet-9xgvp                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-pause-742370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-pause-742370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-mcjx7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-742370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 91s                  kube-proxy       
	  Normal   Starting                 20s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node pause-742370 status is now: NodeHasSufficientMemory
	  Normal   Starting                 105s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 105s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node pause-742370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x8 over 105s)  kubelet          Node pause-742370 status is now: NodeHasSufficientPID
	  Normal   Starting                 98s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 98s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  97s                  kubelet          Node pause-742370 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s                  kubelet          Node pause-742370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s                  kubelet          Node pause-742370 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           94s                  node-controller  Node pause-742370 event: Registered Node pause-742370 in Controller
	  Normal   NodeReady                51s                  kubelet          Node pause-742370 status is now: NodeReady
	  Warning  ContainerGCFailed        38s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           18s                  node-controller  Node pause-742370 event: Registered Node pause-742370 in Controller
	
	
	==> dmesg <==
	[ +33.622137] overlayfs: idmapped layers are currently not supported
	[Nov15 10:01] overlayfs: idmapped layers are currently not supported
	[Nov15 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.446621] overlayfs: idmapped layers are currently not supported
	[Nov15 10:03] overlayfs: idmapped layers are currently not supported
	[ +29.285636] overlayfs: idmapped layers are currently not supported
	[Nov15 10:05] overlayfs: idmapped layers are currently not supported
	[Nov15 10:09] overlayfs: idmapped layers are currently not supported
	[Nov15 10:10] overlayfs: idmapped layers are currently not supported
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [310bc98f84eb9a13685d56f25f40ae6ee2024ca6e91383fa77e34b73a5d1ccdd] <==
	{"level":"warn","ts":"2025-11-15T10:27:59.662086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.692551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.706517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.766526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.772883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.798745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.813752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.831925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.854057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.872480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.902774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.912976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.947286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.954798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.985226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.051011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.055264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.074640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.132278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.166788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.264887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.294708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.333063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.362462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.481470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	
	
	==> etcd [5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf] <==
	{"level":"info","ts":"2025-11-15T10:27:45.480200Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:27:45.500304Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T10:27:45.511796Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:27:45.511914Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-11-15T10:27:45.512721Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-15T10:27:45.512970Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:27:45.515958Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T10:27:45.743901Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:27:45.744014Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-742370","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-15T10:27:45.744322Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:45.748670Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:45.750456Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:45.750564Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-15T10:27:45.750796Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:27:45.750847Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751039Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751099Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:45.751143Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751215Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751252Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:45.751314Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:45.761430Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-15T10:27:45.761895Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:45.761968Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:27:45.762010Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-742370","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 10:28:24 up  5:10,  0 user,  load average: 1.99, 2.62, 2.34
	Linux pause-742370 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228] <==
	I1115 10:27:44.771547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:27:44.798658       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:27:44.798820       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:27:44.798834       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:27:44.798846       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:27:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:27:44.993586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:27:44.993695       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:27:44.993730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:27:44.998743       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kindnet [eca10abde8316621413e98548b270d2b5740f5e6c6f2e387403a71e8813355f1] <==
	I1115 10:27:57.726446       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:27:57.726664       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:27:57.726818       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:27:57.726830       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:27:57.726840       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:27:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:27:58.015225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:27:58.015331       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:27:58.015412       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:27:58.017315       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:28:02.016509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:28:02.016678       1 metrics.go:72] Registering metrics
	I1115 10:28:02.016801       1 controller.go:711] "Syncing nftables rules"
	I1115 10:28:07.931335       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:28:07.931395       1 main.go:301] handling current node
	I1115 10:28:17.927584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:28:17.927618       1 main.go:301] handling current node
	
	
	==> kube-apiserver [66f491c30963d573277c592e9b2e25156e8a6d81a7cc25ce3ca261987f5ebf0e] <==
	I1115 10:28:01.863789       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:28:01.866343       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:28:01.866477       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:28:01.866564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:28:01.881832       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:28:01.886156       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:28:01.886229       1 policy_source.go:240] refreshing policies
	I1115 10:28:01.896145       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:28:01.915199       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:28:01.915400       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:28:01.915516       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:28:01.915830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:28:01.915961       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:28:01.916291       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:28:01.966561       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:28:01.968593       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:28:01.983223       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:28:02.009091       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:28:02.010537       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:28:02.510712       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:28:04.667212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:28:06.211269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:28:06.309922       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:28:06.408852       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:28:06.459369       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792] <==
	I1115 10:27:44.610575       1 options.go:263] external host was not specified, using 192.168.85.2
	I1115 10:27:44.613025       1 server.go:150] Version: v1.34.1
	I1115 10:27:44.613058       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184] <==
	
	
	==> kube-controller-manager [f87e275e82af871d828ba41fd96a95ab60f2bc5453c125ca1e11b2196c628dfa] <==
	I1115 10:28:06.084591       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:28:06.089244       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:28:06.089366       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:28:06.089471       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:28:06.089521       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:28:06.089549       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:28:06.089577       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:28:06.097175       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:28:06.102594       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:28:06.102722       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:28:06.102770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:28:06.102862       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:28:06.102905       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:28:06.103228       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:28:06.103277       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:28:06.103290       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:28:06.113294       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:28:06.134105       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:28:06.134193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:28:06.135293       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:28:06.135361       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:28:06.178770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:28:06.221225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:28:06.221339       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:28:06.221383       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb] <==
	
	
	==> kube-proxy [ab0f0a81ef6eded6ffef0d3661cdcd2c942b9b2627c3c49c8cdd8a50142ef602] <==
	I1115 10:27:59.958830       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:28:01.354078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:28:02.589680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:28:02.589718       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:28:02.589784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:28:03.659872       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:28:03.659997       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:28:03.701077       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:28:03.702771       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:28:03.703512       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:28:03.704942       1 config.go:200] "Starting service config controller"
	I1115 10:28:03.705014       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:28:03.705106       1 config.go:309] "Starting node config controller"
	I1115 10:28:03.705149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:28:03.705182       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:28:03.705212       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:28:03.709064       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:28:03.705225       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:28:03.709162       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:28:03.805943       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:28:03.810172       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:28:03.810265       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186] <==
	
	
	==> kube-scheduler [c8e184bde3a719ef981434aa1018993df941ba79e4a11831f1d812e69b3afee4] <==
	I1115 10:28:00.811305       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:28:03.811170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:28:03.811198       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:28:03.816044       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:28:03.816196       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:28:03.816154       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:28:03.816247       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:28:03.816176       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:28:03.816504       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:28:03.819697       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:28:03.819749       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:28:03.916371       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:28:03.916516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:28:03.916899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:27:57 pause-742370 kubelet[1322]: E1115 10:27:57.543739    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-742370\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0c3dec5f461c73a098dc2e296a8d5a0b" pod="kube-system/kube-apiserver-pause-742370"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: E1115 10:27:57.543992    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-55cnz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="23e8d4ed-4a7c-4411-a5ab-ecc48346820e" pod="kube-system/coredns-66bc5c9577-55cnz"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.573520    1322 scope.go:117] "RemoveContainer" containerID="ea4fb980a515e467e33d46395276a789a4ece55251fb99babf679dbbab5e61ca"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.641297    1322 scope.go:117] "RemoveContainer" containerID="4602e2b05d93357c275e91842c3b8c26bcc12dff12d5c91834779995adcb7294"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.670637    1322 scope.go:117] "RemoveContainer" containerID="f813afbe6ccd1ed2d7cea245b742dcfab91a1cdfaedbb9e40e4563ae9760c9f6"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.718766    1322 scope.go:117] "RemoveContainer" containerID="ebfe8165e01e1232a7622dd6f871d26bdca77a08246dd8c8c9e76236d35743e0"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.736626    1322 scope.go:117] "RemoveContainer" containerID="534661e04c63930425a4633a1e9e9ed45d5dfbe868b098444d1060b9a020af8f"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.751595    1322 scope.go:117] "RemoveContainer" containerID="d3edb77378f704636b6b906c0602b5fbbc83ab4dea8ada6a5cd4185864948c6c"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: I1115 10:27:58.557425    1322 scope.go:117] "RemoveContainer" containerID="760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: E1115 10:27:58.558125    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-pv4lm_kube-system(0743264a-d168-4c5f-ae23-90f5be7daea5)\"" pod="kube-system/coredns-66bc5c9577-pv4lm" podUID="0743264a-d168-4c5f-ae23-90f5be7daea5"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: I1115 10:27:58.561897    1322 scope.go:117] "RemoveContainer" containerID="6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: E1115 10:27:58.562227    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-55cnz_kube-system(23e8d4ed-4a7c-4411-a5ab-ecc48346820e)\"" pod="kube-system/coredns-66bc5c9577-55cnz" podUID="23e8d4ed-4a7c-4411-a5ab-ecc48346820e"
	Nov 15 10:28:01 pause-742370 kubelet[1322]: E1115 10:28:01.886431    1322 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-742370\" is forbidden: User \"system:node:pause-742370\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-742370' and this object" podUID="f1d79d64c588692e8fc9a659e962a96e" pod="kube-system/etcd-pause-742370"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: I1115 10:28:03.541871    1322 scope.go:117] "RemoveContainer" containerID="6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: E1115 10:28:03.542539    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-55cnz_kube-system(23e8d4ed-4a7c-4411-a5ab-ecc48346820e)\"" pod="kube-system/coredns-66bc5c9577-55cnz" podUID="23e8d4ed-4a7c-4411-a5ab-ecc48346820e"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: I1115 10:28:03.546114    1322 scope.go:117] "RemoveContainer" containerID="760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: E1115 10:28:03.546405    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-pv4lm_kube-system(0743264a-d168-4c5f-ae23-90f5be7daea5)\"" pod="kube-system/coredns-66bc5c9577-pv4lm" podUID="0743264a-d168-4c5f-ae23-90f5be7daea5"
	Nov 15 10:28:07 pause-742370 kubelet[1322]: W1115 10:28:07.348860    1322 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 10:28:16 pause-742370 kubelet[1322]: I1115 10:28:16.130644    1322 scope.go:117] "RemoveContainer" containerID="6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	Nov 15 10:28:17 pause-742370 kubelet[1322]: W1115 10:28:17.363149    1322 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 10:28:18 pause-742370 kubelet[1322]: I1115 10:28:18.130668    1322 scope.go:117] "RemoveContainer" containerID="760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	Nov 15 10:28:20 pause-742370 kubelet[1322]: I1115 10:28:20.518025    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-55cnz" podStartSLOduration=88.518009188 podStartE2EDuration="1m28.518009188s" podCreationTimestamp="2025-11-15 10:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:27:34.365720858 +0000 UTC m=+47.579682415" watchObservedRunningTime="2025-11-15 10:28:20.518009188 +0000 UTC m=+93.731970729"
	Nov 15 10:28:21 pause-742370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:28:21 pause-742370 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:28:21 pause-742370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-742370 -n pause-742370
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-742370 -n pause-742370: exit status 2 (437.790804ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-742370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-742370
helpers_test.go:243: (dbg) docker inspect pause-742370:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470",
	        "Created": "2025-11-15T10:26:18.346213649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 672065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:26:18.416970193Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/hosts",
	        "LogPath": "/var/lib/docker/containers/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470/cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470-json.log",
	        "Name": "/pause-742370",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-742370:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-742370",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf89b6cf4733e3baf54c62cf49a4a63593fb5dcfd3f235e0b5763b9e2412d470",
	                "LowerDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/852d13fe0c28782adca5e9b489255a8f968a6cca4a54a091f2b730a05b0e919f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-742370",
	                "Source": "/var/lib/docker/volumes/pause-742370/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-742370",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-742370",
	                "name.minikube.sigs.k8s.io": "pause-742370",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0d71bfbe60f0052cb05247de54c2b9acf37b1b42e2af5cb4f76d1093fcc5e70",
	            "SandboxKey": "/var/run/docker/netns/f0d71bfbe60f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33754"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33755"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33758"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33756"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33757"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-742370": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:ba:5c:c2:8d:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b743444ceeefc4c7ce9c0bb010c6f891ec5bc462da4ae50a3f11b35777d6b156",
	                    "EndpointID": "ea61f56dc674e4bf4509d2f02eac15b1b3df9a5ba0ca3c4e7c430c20a6675322",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-742370",
	                        "cf89b6cf4733"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-742370 -n pause-742370
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-742370 -n pause-742370: exit status 2 (456.817074ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-742370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-742370 logs -n 25: (1.42894541s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-759398 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:22 UTC │ 15 Nov 25 10:22 UTC │
	│ start   │ -p missing-upgrade-372439 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-372439    │ jenkins │ v1.32.0 │ 15 Nov 25 10:22 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:22 UTC │ 15 Nov 25 10:23 UTC │
	│ delete  │ -p NoKubernetes-759398                                                                                                                   │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p missing-upgrade-372439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-372439    │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:24 UTC │
	│ ssh     │ -p NoKubernetes-759398 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │                     │
	│ stop    │ -p NoKubernetes-759398                                                                                                                   │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p NoKubernetes-759398 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ ssh     │ -p NoKubernetes-759398 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │                     │
	│ delete  │ -p NoKubernetes-759398                                                                                                                   │ NoKubernetes-759398       │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:23 UTC │
	│ start   │ -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:23 UTC │ 15 Nov 25 10:24 UTC │
	│ delete  │ -p missing-upgrade-372439                                                                                                                │ missing-upgrade-372439    │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ start   │ -p stopped-upgrade-063492 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-063492    │ jenkins │ v1.32.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ stop    │ -p kubernetes-upgrade-480353                                                                                                             │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ start   │ -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │                     │
	│ stop    │ stopped-upgrade-063492 stop                                                                                                              │ stopped-upgrade-063492    │ jenkins │ v1.32.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:24 UTC │
	│ start   │ -p stopped-upgrade-063492 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-063492    │ jenkins │ v1.37.0 │ 15 Nov 25 10:24 UTC │ 15 Nov 25 10:25 UTC │
	│ delete  │ -p stopped-upgrade-063492                                                                                                                │ stopped-upgrade-063492    │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ start   │ -p running-upgrade-528342 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-528342    │ jenkins │ v1.32.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:25 UTC │
	│ start   │ -p running-upgrade-528342 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-528342    │ jenkins │ v1.37.0 │ 15 Nov 25 10:25 UTC │ 15 Nov 25 10:26 UTC │
	│ delete  │ -p running-upgrade-528342                                                                                                                │ running-upgrade-528342    │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:26 UTC │
	│ start   │ -p pause-742370 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-742370              │ jenkins │ v1.37.0 │ 15 Nov 25 10:26 UTC │ 15 Nov 25 10:27 UTC │
	│ start   │ -p pause-742370 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-742370              │ jenkins │ v1.37.0 │ 15 Nov 25 10:27 UTC │ 15 Nov 25 10:28 UTC │
	│ pause   │ -p pause-742370 --alsologtostderr -v=5                                                                                                   │ pause-742370              │ jenkins │ v1.37.0 │ 15 Nov 25 10:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:27:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:27:36.855187  676403 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:27:36.855352  676403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:36.855362  676403 out.go:374] Setting ErrFile to fd 2...
	I1115 10:27:36.855368  676403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:27:36.855702  676403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:27:36.856104  676403 out.go:368] Setting JSON to false
	I1115 10:27:36.857140  676403 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18608,"bootTime":1763183849,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:27:36.857213  676403 start.go:143] virtualization:  
	I1115 10:27:36.860110  676403 out.go:179] * [pause-742370] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:27:36.863986  676403 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:27:36.864094  676403 notify.go:221] Checking for updates...
	I1115 10:27:36.870109  676403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:27:36.873080  676403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:27:36.876053  676403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:27:36.878981  676403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:27:36.881937  676403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:27:36.885300  676403 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:36.885969  676403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:27:36.917761  676403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:27:36.918034  676403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:36.984210  676403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:27:36.974482821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:27:36.984323  676403 docker.go:319] overlay module found
	I1115 10:27:36.987310  676403 out.go:179] * Using the docker driver based on existing profile
	I1115 10:27:36.990180  676403 start.go:309] selected driver: docker
	I1115 10:27:36.990204  676403 start.go:930] validating driver "docker" against &{Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:36.990340  676403 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:27:36.990461  676403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:27:37.051488  676403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:27:37.042579815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:27:37.051994  676403 cni.go:84] Creating CNI manager for ""
	I1115 10:27:37.052058  676403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:37.052111  676403 start.go:353] cluster config:
	{Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:37.057197  676403 out.go:179] * Starting "pause-742370" primary control-plane node in "pause-742370" cluster
	I1115 10:27:37.060052  676403 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:27:37.062962  676403 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:27:37.065784  676403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:27:37.065836  676403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:27:37.065852  676403 cache.go:65] Caching tarball of preloaded images
	I1115 10:27:37.065890  676403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:27:37.065938  676403 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:27:37.065948  676403 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:27:37.066099  676403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/config.json ...
	I1115 10:27:37.086102  676403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:27:37.086125  676403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:27:37.086145  676403 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:27:37.086168  676403 start.go:360] acquireMachinesLock for pause-742370: {Name:mke364f2e8b67d701cb09d47fae8f68eed7d5351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:27:37.086224  676403 start.go:364] duration metric: took 35.864µs to acquireMachinesLock for "pause-742370"
	I1115 10:27:37.086249  676403 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:27:37.086259  676403 fix.go:54] fixHost starting: 
	I1115 10:27:37.086526  676403 cli_runner.go:164] Run: docker container inspect pause-742370 --format={{.State.Status}}
	I1115 10:27:37.104475  676403 fix.go:112] recreateIfNeeded on pause-742370: state=Running err=<nil>
	W1115 10:27:37.104511  676403 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:27:38.365820  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:38.366247  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
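The retry loop above is just an HTTPS GET against the apiserver's /healthz endpoint; while kube-apiserver is restarting it fails with "connection refused" (here) or a client timeout (the attempt at 10:27:46 further down). A minimal Go sketch of such a probe, not taken from minikube's source — the address is the one in the log, and the short timeout plus InsecureSkipVerify are illustration-only assumptions, since the apiserver serves a cluster-local certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by minikubeCA, which this probe does not carry.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// Typically "connect: connection refused" or a timeout while the control plane restarts.
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}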
	I1115 10:27:38.366295  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:27:38.366356  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:27:38.394603  661959 cri.go:89] found id: "ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:38.394633  661959 cri.go:89] found id: ""
	I1115 10:27:38.394641  661959 logs.go:282] 1 containers: [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]
	I1115 10:27:38.394697  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:38.398431  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:27:38.398501  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:27:38.434175  661959 cri.go:89] found id: ""
	I1115 10:27:38.434201  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.434210  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:27:38.434217  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:27:38.434274  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:27:38.478808  661959 cri.go:89] found id: ""
	I1115 10:27:38.478839  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.478848  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:27:38.478857  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:27:38.478915  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:27:38.521824  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:38.521847  661959 cri.go:89] found id: ""
	I1115 10:27:38.521865  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:27:38.521928  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:38.526273  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:27:38.526348  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:27:38.567437  661959 cri.go:89] found id: ""
	I1115 10:27:38.567465  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.567473  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:27:38.567479  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:27:38.567536  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:27:38.595104  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:38.595127  661959 cri.go:89] found id: ""
	I1115 10:27:38.595135  661959 logs.go:282] 1 containers: [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:27:38.595188  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:38.599537  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:27:38.599606  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:27:38.637115  661959 cri.go:89] found id: ""
	I1115 10:27:38.637206  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.637234  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:27:38.637270  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:27:38.637369  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:27:38.668663  661959 cri.go:89] found id: ""
	I1115 10:27:38.668730  661959 logs.go:282] 0 containers: []
	W1115 10:27:38.668742  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:27:38.668751  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:27:38.668762  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:38.727300  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:27:38.727336  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:38.752479  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:27:38.752515  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:27:38.808943  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:27:38.808979  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:27:38.837970  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:27:38.838001  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:27:38.958768  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:27:38.958807  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:27:38.977499  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:27:38.977530  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:27:39.045269  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:27:39.045290  661959 logs.go:123] Gathering logs for kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1] ...
	I1115 10:27:39.045305  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:37.107750  676403 out.go:252] * Updating the running docker "pause-742370" container ...
	I1115 10:27:37.107791  676403 machine.go:94] provisionDockerMachine start ...
	I1115 10:27:37.107894  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:37.124845  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:37.125156  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:37.125173  676403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:27:37.281565  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-742370
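provisionDockerMachine reaches the node over SSH on the host port docker published for 22/tcp (33754, obtained by the container-inspect call above) and runs `hostname` as a first liveness check. A hedged sketch of an equivalent client using golang.org/x/crypto/ssh (an extra dependency; user, port and key path are the ones shown in the log, and skipping host-key verification is an assumption acceptable only for a throwaway test VM):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, host key not pinned
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33754", cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer session.Close()
		out, err := session.Output("hostname") // same command the log shows above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s", out)
	}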
	
	I1115 10:27:37.281587  676403 ubuntu.go:182] provisioning hostname "pause-742370"
	I1115 10:27:37.281706  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:37.309778  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:37.310094  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:37.310111  676403 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-742370 && echo "pause-742370" | sudo tee /etc/hostname
	I1115 10:27:37.474557  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-742370
	
	I1115 10:27:37.474632  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:37.493128  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:37.493442  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:37.493465  676403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-742370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-742370/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-742370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:27:37.646069  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:27:37.646092  676403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:27:37.646110  676403 ubuntu.go:190] setting up certificates
	I1115 10:27:37.646120  676403 provision.go:84] configureAuth start
	I1115 10:27:37.646177  676403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-742370
	I1115 10:27:37.663872  676403 provision.go:143] copyHostCerts
	I1115 10:27:37.663952  676403 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:27:37.663967  676403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:27:37.664042  676403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:27:37.664140  676403 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:27:37.664146  676403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:27:37.664176  676403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:27:37.664223  676403 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:27:37.664228  676403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:27:37.664249  676403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:27:37.664296  676403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.pause-742370 san=[127.0.0.1 192.168.85.2 localhost minikube pause-742370]
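The "generating server cert" step produces a server certificate whose SANs cover every name and address the node may be reached by (127.0.0.1, 192.168.85.2, localhost, minikube, pause-742370). A rough, self-signed stand-in using crypto/x509 — minikube actually signs with the ca.pem/ca-key.pem shown above, so this only illustrates how that SAN list maps onto DNSNames and IPAddresses:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-742370"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the log line: hostnames as DNSNames, addresses as IPAddresses.
			DNSNames:    []string{"localhost", "minikube", "pause-742370"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		// Self-signed here for brevity; the real server.pem is signed by minikubeCA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}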
	I1115 10:27:38.050246  676403 provision.go:177] copyRemoteCerts
	I1115 10:27:38.050350  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:27:38.050402  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:38.070099  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:38.178329  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:27:38.196071  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 10:27:38.226732  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:27:38.245203  676403 provision.go:87] duration metric: took 599.059835ms to configureAuth
	I1115 10:27:38.245231  676403 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:27:38.245457  676403 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:38.245565  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:38.266307  676403 main.go:143] libmachine: Using SSH client type: native
	I1115 10:27:38.266620  676403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33754 <nil> <nil>}
	I1115 10:27:38.266641  676403 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:27:41.577436  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:27:43.606938  676403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:27:43.606961  676403 machine.go:97] duration metric: took 6.49915394s to provisionDockerMachine
	I1115 10:27:43.606972  676403 start.go:293] postStartSetup for "pause-742370" (driver="docker")
	I1115 10:27:43.606983  676403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:27:43.607048  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:27:43.607101  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.628070  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:43.734137  676403 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:27:43.737547  676403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:27:43.737578  676403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:27:43.737590  676403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:27:43.737674  676403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:27:43.737772  676403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:27:43.737876  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:27:43.745305  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:27:43.763353  676403 start.go:296] duration metric: took 156.364146ms for postStartSetup
	I1115 10:27:43.763432  676403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:27:43.763477  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.780920  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:43.887164  676403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:27:43.892905  676403 fix.go:56] duration metric: took 6.80663736s for fixHost
	I1115 10:27:43.892930  676403 start.go:83] releasing machines lock for "pause-742370", held for 6.806692974s
	I1115 10:27:43.893013  676403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-742370
	I1115 10:27:43.909703  676403 ssh_runner.go:195] Run: cat /version.json
	I1115 10:27:43.909761  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.909853  676403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:27:43.909936  676403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-742370
	I1115 10:27:43.932441  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:43.950444  676403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33754 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/pause-742370/id_rsa Username:docker}
	I1115 10:27:44.041299  676403 ssh_runner.go:195] Run: systemctl --version
	I1115 10:27:44.138243  676403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:27:44.180266  676403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:27:44.184974  676403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:27:44.185083  676403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:27:44.193047  676403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:27:44.193072  676403 start.go:496] detecting cgroup driver to use...
	I1115 10:27:44.193106  676403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:27:44.193159  676403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:27:44.209183  676403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:27:44.222357  676403 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:27:44.222458  676403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:27:44.238486  676403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:27:44.251673  676403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:27:44.408485  676403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:27:44.648015  676403 docker.go:234] disabling docker service ...
	I1115 10:27:44.648118  676403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:27:44.682172  676403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:27:44.710509  676403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:27:44.967610  676403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:27:45.271336  676403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:27:45.293432  676403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:27:45.320935  676403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:27:45.321039  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.337196  676403 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:27:45.337284  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.347384  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.362881  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.379899  676403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:27:45.393320  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.420737  676403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.452318  676403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:27:45.471238  676403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:27:45.486867  676403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:27:45.498953  676403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:45.727501  676403 ssh_runner.go:195] Run: sudo systemctl restart crio
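The sed commands at 10:27:45 rewrite /etc/crio/crio.conf.d/02-crio.conf in place — pause image, cgroupfs cgroup manager, conmon_cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl — and CRI-O only picks them up after the restart issued here (which, as the log shows later, took about 9.5s). A condensed sketch of the two central edits plus the restart, assuming it runs directly on the node rather than through minikube's ssh_runner:

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", args, err, out)
		}
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		// Swap the pause image and the cgroup manager in place, exactly as the sed lines above do.
		run("sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`, conf)
		run("sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
		// CRI-O only reads the file on startup, hence the restart.
		run("sudo", "systemctl", "restart", "crio")
	}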
	I1115 10:27:46.578210  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1115 10:27:46.578269  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:27:46.578345  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:27:46.604540  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:27:46.604564  661959 cri.go:89] found id: "ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:46.604570  661959 cri.go:89] found id: ""
	I1115 10:27:46.604577  661959 logs.go:282] 2 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]
	I1115 10:27:46.604635  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.608309  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.611800  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:27:46.611873  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:27:46.639606  661959 cri.go:89] found id: ""
	I1115 10:27:46.639632  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.639641  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:27:46.639650  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:27:46.639763  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:27:46.668414  661959 cri.go:89] found id: ""
	I1115 10:27:46.668440  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.668449  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:27:46.668455  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:27:46.668514  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:27:46.695459  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:46.695483  661959 cri.go:89] found id: ""
	I1115 10:27:46.695492  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:27:46.695546  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.699223  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:27:46.699293  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:27:46.724723  661959 cri.go:89] found id: ""
	I1115 10:27:46.724746  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.724754  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:27:46.724761  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:27:46.724819  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:27:46.751093  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:46.751114  661959 cri.go:89] found id: ""
	I1115 10:27:46.751123  661959 logs.go:282] 1 containers: [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:27:46.751177  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:27:46.754854  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:27:46.754923  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:27:46.780259  661959 cri.go:89] found id: ""
	I1115 10:27:46.780285  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.780294  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:27:46.780300  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:27:46.780358  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:27:46.812599  661959 cri.go:89] found id: ""
	I1115 10:27:46.812625  661959 logs.go:282] 0 containers: []
	W1115 10:27:46.812634  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:27:46.812648  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:27:46.812660  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:27:46.931609  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:27:46.931648  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1115 10:27:55.180781  676403 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.453243052s)
	I1115 10:27:55.180804  676403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:27:55.180870  676403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:27:55.190541  676403 start.go:564] Will wait 60s for crictl version
	I1115 10:27:55.190622  676403 ssh_runner.go:195] Run: which crictl
	I1115 10:27:55.194809  676403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:27:55.219911  676403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:27:55.220002  676403 ssh_runner.go:195] Run: crio --version
	I1115 10:27:55.251012  676403 ssh_runner.go:195] Run: crio --version
	I1115 10:27:55.281676  676403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:27:55.284764  676403 cli_runner.go:164] Run: docker network inspect pause-742370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:27:55.300062  676403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:27:55.304444  676403 kubeadm.go:884] updating cluster {Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:27:55.304586  676403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:27:55.304643  676403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:55.340284  676403 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:55.340308  676403 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:27:55.340371  676403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:27:55.365962  676403 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:27:55.365985  676403 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:27:55.365994  676403 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 10:27:55.366092  676403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-742370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:27:55.366175  676403 ssh_runner.go:195] Run: crio config
	I1115 10:27:55.428913  676403 cni.go:84] Creating CNI manager for ""
	I1115 10:27:55.428936  676403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:27:55.428959  676403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:27:55.428982  676403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-742370 NodeName:pause-742370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:27:55.429106  676403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-742370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:27:55.429181  676403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:27:55.437553  676403 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:27:55.437645  676403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:27:55.445218  676403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1115 10:27:55.457578  676403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:27:55.471610  676403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1115 10:27:55.484026  676403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:27:55.488973  676403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:55.635797  676403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:55.649953  676403 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370 for IP: 192.168.85.2
	I1115 10:27:55.650031  676403 certs.go:195] generating shared ca certs ...
	I1115 10:27:55.650057  676403 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:55.650206  676403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:27:55.650260  676403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:27:55.650286  676403 certs.go:257] generating profile certs ...
	I1115 10:27:55.650383  676403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key
	I1115 10:27:55.650450  676403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/apiserver.key.57edb4e6
	I1115 10:27:55.650529  676403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/proxy-client.key
	I1115 10:27:55.650640  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:27:55.650673  676403 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:27:55.650685  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:27:55.650708  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:27:55.650732  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:27:55.650758  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:27:55.650805  676403 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:27:55.651383  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:27:55.669769  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:27:55.686571  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:27:55.704217  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:27:55.720625  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:27:55.737994  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:27:55.755106  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:27:55.772322  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:27:55.789192  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:27:55.806155  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:27:55.822821  676403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:27:55.848033  676403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:27:55.860488  676403 ssh_runner.go:195] Run: openssl version
	I1115 10:27:55.866743  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:27:55.875261  676403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:27:55.878860  676403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:27:55.878921  676403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:27:55.919448  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:27:55.927503  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:27:55.935935  676403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:55.939504  676403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:55.939579  676403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:27:55.981175  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:27:55.988970  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:27:55.997298  676403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:27:56.001888  676403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:27:56.002014  676403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:27:56.045408  676403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:27:56.053703  676403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:27:56.057549  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:27:56.098596  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:27:56.140675  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:27:56.181431  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:27:56.224057  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:27:56.264971  676403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
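	(Aside, not part of the test output: the `-checkend 86400` probes above only verify that each certificate remains valid for at least the next 24 hours. A minimal Go sketch of the same check, using only the standard library and a hypothetical cert path, looks roughly like this:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path is still valid
	// for at least duration d, mirroring what `openssl x509 -checkend` tests.
	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		// Hypothetical path; the log checks apiserver, etcd and front-proxy certs this way.
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for next 24h:", ok)
	}

	End of aside; the test log continues below.)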
	I1115 10:27:56.305796  676403 kubeadm.go:401] StartCluster: {Name:pause-742370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-742370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:27:56.305911  676403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:27:56.306006  676403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:27:56.334491  676403 cri.go:89] found id: "772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb"
	I1115 10:27:56.334514  676403 cri.go:89] found id: "894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186"
	I1115 10:27:56.334521  676403 cri.go:89] found id: "6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	I1115 10:27:56.334525  676403 cri.go:89] found id: "72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228"
	I1115 10:27:56.334528  676403 cri.go:89] found id: "760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	I1115 10:27:56.334532  676403 cri.go:89] found id: "cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184"
	I1115 10:27:56.334535  676403 cri.go:89] found id: "dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792"
	I1115 10:27:56.334562  676403 cri.go:89] found id: "5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf"
	I1115 10:27:56.334571  676403 cri.go:89] found id: "6f1a31e05552c254ec0bd0ee9f0c3d765425a121cef0b862f4c97573e8983092"
	I1115 10:27:56.334580  676403 cri.go:89] found id: "ea4fb980a515e467e33d46395276a789a4ece55251fb99babf679dbbab5e61ca"
	I1115 10:27:56.334583  676403 cri.go:89] found id: "d3edb77378f704636b6b906c0602b5fbbc83ab4dea8ada6a5cd4185864948c6c"
	I1115 10:27:56.334587  676403 cri.go:89] found id: "c64150522e89e835bdfa195e37765ff2ef4ae63ad8d7025bc5e5b9f075cb55af"
	I1115 10:27:56.334590  676403 cri.go:89] found id: "534661e04c63930425a4633a1e9e9ed45d5dfbe868b098444d1060b9a020af8f"
	I1115 10:27:56.334593  676403 cri.go:89] found id: "4602e2b05d93357c275e91842c3b8c26bcc12dff12d5c91834779995adcb7294"
	I1115 10:27:56.334599  676403 cri.go:89] found id: "f813afbe6ccd1ed2d7cea245b742dcfab91a1cdfaedbb9e40e4563ae9760c9f6"
	I1115 10:27:56.334605  676403 cri.go:89] found id: "ebfe8165e01e1232a7622dd6f871d26bdca77a08246dd8c8c9e76236d35743e0"
	I1115 10:27:56.334611  676403 cri.go:89] found id: ""
	I1115 10:27:56.334681  676403 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:27:56.346521  676403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:27:56Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:27:56.346631  676403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:27:56.355302  676403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:27:56.355323  676403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:27:56.355398  676403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:27:56.363009  676403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:27:56.363653  676403 kubeconfig.go:125] found "pause-742370" server: "https://192.168.85.2:8443"
	I1115 10:27:56.364473  676403 kapi.go:59] client config for pause-742370: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key", CAFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:27:56.364979  676403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 10:27:56.365000  676403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 10:27:56.365007  676403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 10:27:56.365012  676403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 10:27:56.365020  676403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 10:27:56.365292  676403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:27:56.373373  676403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:27:56.373406  676403 kubeadm.go:602] duration metric: took 18.077128ms to restartPrimaryControlPlane
	I1115 10:27:56.373416  676403 kubeadm.go:403] duration metric: took 67.640692ms to StartCluster
	I1115 10:27:56.373435  676403 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:56.373499  676403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:27:56.374459  676403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:27:56.374725  676403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:27:56.375162  676403 config.go:182] Loaded profile config "pause-742370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:27:56.375247  676403 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:27:56.381185  676403 out.go:179] * Verifying Kubernetes components...
	I1115 10:27:56.381187  676403 out.go:179] * Enabled addons: 
	I1115 10:27:56.384020  676403 addons.go:515] duration metric: took 8.767671ms for enable addons: enabled=[]
	I1115 10:27:56.384070  676403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:27:56.531940  676403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:27:56.546817  676403 node_ready.go:35] waiting up to 6m0s for node "pause-742370" to be "Ready" ...
	I1115 10:27:57.005893  661959 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.074217346s)
	W1115 10:27:57.005947  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1115 10:27:57.005956  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:27:57.005967  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:27:57.032530  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:27:57.032556  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:27:57.093734  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:27:57.093772  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:27:57.129510  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:27:57.129537  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:27:57.148838  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:27:57.148869  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:27:57.183194  661959 logs.go:123] Gathering logs for kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1] ...
	I1115 10:27:57.183228  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:27:57.219528  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:27:57.219560  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:27:59.778537  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:01.890243  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:41472->192.168.76.2:8443: read: connection reset by peer
	I1115 10:28:01.890328  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:01.890422  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:01.958927  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:01.958961  661959 cri.go:89] found id: "ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	I1115 10:28:01.958968  661959 cri.go:89] found id: ""
	I1115 10:28:01.958976  661959 logs.go:282] 2 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]
	I1115 10:28:01.959078  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:01.963082  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:01.969216  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:01.969334  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:02.029344  661959 cri.go:89] found id: ""
	I1115 10:28:02.029381  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.029391  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:02.029397  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:02.029491  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:02.094011  661959 cri.go:89] found id: ""
	I1115 10:28:02.094047  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.094071  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:02.094082  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:02.094244  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:02.158235  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:02.158266  661959 cri.go:89] found id: ""
	I1115 10:28:02.158281  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:02.158378  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:02.163790  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:02.163911  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:02.225684  661959 cri.go:89] found id: ""
	I1115 10:28:02.225718  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.225727  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:02.225760  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:02.225843  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:02.273269  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:02.273292  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:02.273298  661959 cri.go:89] found id: ""
	I1115 10:28:02.273306  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:02.273407  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:02.277479  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:02.285514  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:02.285639  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:02.329220  661959 cri.go:89] found id: ""
	I1115 10:28:02.329283  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.329298  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:02.329305  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:02.329383  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:02.383039  661959 cri.go:89] found id: ""
	I1115 10:28:02.383069  661959 logs.go:282] 0 containers: []
	W1115 10:28:02.383078  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:02.383119  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:02.383139  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:02.436074  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:02.436101  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:02.580705  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:02.580763  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:02.606032  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:02.606103  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:02.737585  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:02.737682  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:02.737711  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:02.792824  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:02.792898  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:02.859114  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:02.859136  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:02.947920  661959 logs.go:123] Gathering logs for kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1] ...
	I1115 10:28:02.948022  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	W1115 10:28:03.022106  661959 logs.go:130] failed kube-apiserver [ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1": Process exited with status 1
	stdout:
	
	stderr:
	E1115 10:28:03.019254    4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist" containerID="ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	time="2025-11-15T10:28:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1115 10:28:03.019254    4637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist" containerID="ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1"
	time="2025-11-15T10:28:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1\": container with ID starting with ff849cfb44a2497cbe2886e097e7090b62b9e63f76ee05e7570720e9db2da1b1 not found: ID does not exist"
	
	** /stderr **
	I1115 10:28:03.022124  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:03.022142  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:03.107920  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:03.108001  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:05.648616  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:05.649020  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:05.649059  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:05.649118  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:05.676068  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:05.676090  661959 cri.go:89] found id: ""
	I1115 10:28:05.676097  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:05.676155  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.679850  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:05.679930  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:05.709032  661959 cri.go:89] found id: ""
	I1115 10:28:05.709061  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.709076  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:05.709083  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:05.709141  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:05.735315  661959 cri.go:89] found id: ""
	I1115 10:28:05.735341  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.735351  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:05.735357  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:05.735416  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:05.763120  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:05.763148  661959 cri.go:89] found id: ""
	I1115 10:28:05.763158  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:05.763228  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.767147  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:05.767223  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:05.794594  661959 cri.go:89] found id: ""
	I1115 10:28:05.794620  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.794629  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:05.794637  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:05.794726  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:05.827545  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:05.827568  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:05.827573  661959 cri.go:89] found id: ""
	I1115 10:28:05.827580  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:05.827640  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.831659  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:05.835230  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:05.835299  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:05.866245  661959 cri.go:89] found id: ""
	I1115 10:28:05.866268  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.866276  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:05.866282  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:05.866347  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:05.892865  661959 cri.go:89] found id: ""
	I1115 10:28:05.892931  661959 logs.go:282] 0 containers: []
	W1115 10:28:05.892954  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:05.892991  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:05.893022  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:05.911076  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:05.911107  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:05.948141  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:05.948174  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:06.028323  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:06.028362  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:06.077053  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:06.077082  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:06.213526  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:06.213619  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:06.290582  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:06.290605  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:06.290621  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:06.353623  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:06.353664  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:06.379765  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:06.379792  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:02.023267  676403 node_ready.go:49] node "pause-742370" is "Ready"
	I1115 10:28:02.023300  676403 node_ready.go:38] duration metric: took 5.476438461s for node "pause-742370" to be "Ready" ...
	I1115 10:28:02.023321  676403 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:28:02.023385  676403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:28:02.051298  676403 api_server.go:72] duration metric: took 5.676535508s to wait for apiserver process to appear ...
	I1115 10:28:02.051323  676403 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:28:02.051343  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:02.091049  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:02.091163  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:02.552293  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:02.574077  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:02.574122  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:03.051700  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:03.099371  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:03.099400  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:03.551999  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:03.569236  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:28:03.569261  676403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:28:04.051982  676403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:28:04.060178  676403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 10:28:04.061241  676403 api_server.go:141] control plane version: v1.34.1
	I1115 10:28:04.061267  676403 api_server.go:131] duration metric: took 2.009936241s to wait for apiserver health ...
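	(Aside, not part of the test output: the healthz loop above keeps re-querying `/healthz`, treating the 500 responses, where etcd and the rbac/bootstrap-roles post-start hooks are still coming up, as a signal to retry until the endpoint finally returns 200. A simplified sketch of that polling pattern follows; TLS verification is skipped here purely for brevity, whereas the real check authenticates with the cluster CA and client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver healthz endpoint until it returns 200 OK or the timeout expires.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// For illustration only: skip certificate verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}

	End of aside; the test log continues below.)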
	I1115 10:28:04.061277  676403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:28:04.065867  676403 system_pods.go:59] 8 kube-system pods found
	I1115 10:28:04.065913  676403 system_pods.go:61] "coredns-66bc5c9577-55cnz" [23e8d4ed-4a7c-4411-a5ab-ecc48346820e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.065948  676403 system_pods.go:61] "coredns-66bc5c9577-pv4lm" [0743264a-d168-4c5f-ae23-90f5be7daea5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.065964  676403 system_pods.go:61] "etcd-pause-742370" [9739e739-7c92-445a-900f-865eb5f17743] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:28:04.065970  676403 system_pods.go:61] "kindnet-9xgvp" [8db4a3c3-62e1-41af-9e3a-84e123082d25] Running
	I1115 10:28:04.065980  676403 system_pods.go:61] "kube-apiserver-pause-742370" [6a524cfc-f284-49f7-aaab-599544ba7b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:28:04.065999  676403 system_pods.go:61] "kube-controller-manager-pause-742370" [2580948b-a3dc-4cd5-aaf9-0b5ac5d70aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:28:04.066004  676403 system_pods.go:61] "kube-proxy-mcjx7" [828ebe3d-841e-4dba-b3d3-924cc9a20bf4] Running
	I1115 10:28:04.066026  676403 system_pods.go:61] "kube-scheduler-pause-742370" [c32c0652-1422-4bab-a7e9-64386dd7550a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:28:04.066037  676403 system_pods.go:74] duration metric: took 4.735624ms to wait for pod list to return data ...
	I1115 10:28:04.066065  676403 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:28:04.068980  676403 default_sa.go:45] found service account: "default"
	I1115 10:28:04.069011  676403 default_sa.go:55] duration metric: took 2.938784ms for default service account to be created ...
	I1115 10:28:04.069022  676403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:28:04.072306  676403 system_pods.go:86] 8 kube-system pods found
	I1115 10:28:04.072342  676403 system_pods.go:89] "coredns-66bc5c9577-55cnz" [23e8d4ed-4a7c-4411-a5ab-ecc48346820e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.072352  676403 system_pods.go:89] "coredns-66bc5c9577-pv4lm" [0743264a-d168-4c5f-ae23-90f5be7daea5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:28:04.072360  676403 system_pods.go:89] "etcd-pause-742370" [9739e739-7c92-445a-900f-865eb5f17743] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:28:04.072365  676403 system_pods.go:89] "kindnet-9xgvp" [8db4a3c3-62e1-41af-9e3a-84e123082d25] Running
	I1115 10:28:04.072371  676403 system_pods.go:89] "kube-apiserver-pause-742370" [6a524cfc-f284-49f7-aaab-599544ba7b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:28:04.072377  676403 system_pods.go:89] "kube-controller-manager-pause-742370" [2580948b-a3dc-4cd5-aaf9-0b5ac5d70aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:28:04.072388  676403 system_pods.go:89] "kube-proxy-mcjx7" [828ebe3d-841e-4dba-b3d3-924cc9a20bf4] Running
	I1115 10:28:04.072396  676403 system_pods.go:89] "kube-scheduler-pause-742370" [c32c0652-1422-4bab-a7e9-64386dd7550a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:28:04.072414  676403 system_pods.go:126] duration metric: took 3.386246ms to wait for k8s-apps to be running ...
	I1115 10:28:04.072422  676403 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:28:04.072478  676403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:28:04.085729  676403 system_svc.go:56] duration metric: took 13.296343ms WaitForService to wait for kubelet
	I1115 10:28:04.085808  676403 kubeadm.go:587] duration metric: took 7.711050549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:28:04.085834  676403 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:28:04.088804  676403 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:28:04.088876  676403 node_conditions.go:123] node cpu capacity is 2
	I1115 10:28:04.088895  676403 node_conditions.go:105] duration metric: took 3.054087ms to run NodePressure ...
	I1115 10:28:04.088909  676403 start.go:242] waiting for startup goroutines ...
	I1115 10:28:04.088916  676403 start.go:247] waiting for cluster config update ...
	I1115 10:28:04.088925  676403 start.go:256] writing updated cluster config ...
	I1115 10:28:04.089233  676403 ssh_runner.go:195] Run: rm -f paused
	I1115 10:28:04.093002  676403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:28:04.093792  676403 kapi.go:59] client config for pause-742370: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/profiles/pause-742370/client.key", CAFile:"/home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 10:28:04.097118  676403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-55cnz" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:28:06.130351  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	I1115 10:28:08.926437  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:08.926956  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:08.927011  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:08.927081  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:08.966639  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:08.966661  661959 cri.go:89] found id: ""
	I1115 10:28:08.966670  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:08.966723  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:08.970508  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:08.970575  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:08.998980  661959 cri.go:89] found id: ""
	I1115 10:28:08.999005  661959 logs.go:282] 0 containers: []
	W1115 10:28:08.999023  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:08.999029  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:08.999088  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:09.030521  661959 cri.go:89] found id: ""
	I1115 10:28:09.030546  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.030554  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:09.030561  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:09.030620  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:09.055614  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:09.055637  661959 cri.go:89] found id: ""
	I1115 10:28:09.055645  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:09.055702  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:09.059252  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:09.059324  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:09.088097  661959 cri.go:89] found id: ""
	I1115 10:28:09.088124  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.088134  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:09.088144  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:09.088203  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:09.118860  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:09.118882  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:09.118888  661959 cri.go:89] found id: ""
	I1115 10:28:09.118895  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:09.118971  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:09.122640  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:09.126084  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:09.126170  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:09.151459  661959 cri.go:89] found id: ""
	I1115 10:28:09.151521  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.151539  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:09.151547  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:09.151607  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:09.175914  661959 cri.go:89] found id: ""
	I1115 10:28:09.175981  661959 logs.go:282] 0 containers: []
	W1115 10:28:09.175996  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:09.176012  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:09.176026  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:09.193522  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:09.193551  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:09.261265  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:09.261298  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:09.261318  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:09.322283  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:09.322321  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:09.350344  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:09.350373  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:09.377908  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:09.377938  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:09.446289  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:09.446325  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:09.481249  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:09.481278  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:09.514603  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:09.514638  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1115 10:28:08.603202  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	W1115 10:28:10.604201  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	I1115 10:28:12.136621  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:12.137069  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:12.137116  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:12.137182  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:12.164325  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:12.164396  661959 cri.go:89] found id: ""
	I1115 10:28:12.164420  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:12.164509  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.168263  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:12.168397  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:12.195238  661959 cri.go:89] found id: ""
	I1115 10:28:12.195264  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.195273  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:12.195280  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:12.195359  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:12.222866  661959 cri.go:89] found id: ""
	I1115 10:28:12.222898  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.222907  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:12.222914  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:12.223015  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:12.256665  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:12.256689  661959 cri.go:89] found id: ""
	I1115 10:28:12.256698  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:12.256775  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.260498  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:12.260575  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:12.289038  661959 cri.go:89] found id: ""
	I1115 10:28:12.289112  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.289138  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:12.289164  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:12.289253  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:12.316558  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:12.316635  661959 cri.go:89] found id: "7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:12.316654  661959 cri.go:89] found id: ""
	I1115 10:28:12.316681  661959 logs.go:282] 2 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3]
	I1115 10:28:12.316770  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.320522  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:12.323989  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:12.324065  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:12.349870  661959 cri.go:89] found id: ""
	I1115 10:28:12.349945  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.349982  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:12.350018  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:12.350111  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:12.376779  661959 cri.go:89] found id: ""
	I1115 10:28:12.376801  661959 logs.go:282] 0 containers: []
	W1115 10:28:12.376809  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:12.376823  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:12.376836  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:12.394919  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:12.395004  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:12.465076  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:12.465140  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:12.465168  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:12.496317  661959 logs.go:123] Gathering logs for kube-controller-manager [7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3] ...
	I1115 10:28:12.496346  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7ced6223739eab64a607e27d1a3c738534cd6a9fa664682e2b3d42264c73b1d3"
	I1115 10:28:12.522219  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:12.522247  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:12.587519  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:12.587556  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:12.630888  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:12.630920  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:12.752824  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:12.752863  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:12.791436  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:12.791470  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:15.355127  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:15.355589  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:15.355655  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:15.355727  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:15.382489  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:15.382509  661959 cri.go:89] found id: ""
	I1115 10:28:15.382517  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:15.382570  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:15.386326  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:15.386446  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:15.415120  661959 cri.go:89] found id: ""
	I1115 10:28:15.415196  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.415221  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:15.415241  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:15.415322  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:15.442506  661959 cri.go:89] found id: ""
	I1115 10:28:15.442532  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.442542  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:15.442548  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:15.442666  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:15.471329  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:15.471360  661959 cri.go:89] found id: ""
	I1115 10:28:15.471369  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:15.471433  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:15.475454  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:15.475576  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:15.501796  661959 cri.go:89] found id: ""
	I1115 10:28:15.501859  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.501882  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:15.501908  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:15.501990  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:15.530764  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:15.530786  661959 cri.go:89] found id: ""
	I1115 10:28:15.530795  661959 logs.go:282] 1 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e]
	I1115 10:28:15.530875  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:15.534662  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:15.534746  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:15.575537  661959 cri.go:89] found id: ""
	I1115 10:28:15.575614  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.575637  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:15.575662  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:15.575773  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:15.613413  661959 cri.go:89] found id: ""
	I1115 10:28:15.613489  661959 logs.go:282] 0 containers: []
	W1115 10:28:15.613513  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:15.613555  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:15.613587  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:15.661195  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:15.661225  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 10:28:15.795501  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:15.795542  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:15.814627  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:15.814657  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:15.890459  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:15.890477  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:15.890490  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:15.926113  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:15.926144  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:15.989083  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:15.989118  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:16.018212  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:16.018242  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1115 10:28:12.604274  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	W1115 10:28:15.104196  676403 pod_ready.go:104] pod "coredns-66bc5c9577-55cnz" is not "Ready", error: <nil>
	I1115 10:28:17.102098  676403 pod_ready.go:94] pod "coredns-66bc5c9577-55cnz" is "Ready"
	I1115 10:28:17.102123  676403 pod_ready.go:86] duration metric: took 13.004976684s for pod "coredns-66bc5c9577-55cnz" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:17.102133  676403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pv4lm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.108336  676403 pod_ready.go:94] pod "coredns-66bc5c9577-pv4lm" is "Ready"
	I1115 10:28:19.108366  676403 pod_ready.go:86] duration metric: took 2.006225097s for pod "coredns-66bc5c9577-pv4lm" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.111554  676403 pod_ready.go:83] waiting for pod "etcd-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.134365  676403 pod_ready.go:94] pod "etcd-pause-742370" is "Ready"
	I1115 10:28:19.134394  676403 pod_ready.go:86] duration metric: took 22.812317ms for pod "etcd-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.136940  676403 pod_ready.go:83] waiting for pod "kube-apiserver-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.145712  676403 pod_ready.go:94] pod "kube-apiserver-pause-742370" is "Ready"
	I1115 10:28:19.145734  676403 pod_ready.go:86] duration metric: took 8.773759ms for pod "kube-apiserver-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.147875  676403 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.306603  676403 pod_ready.go:94] pod "kube-controller-manager-pause-742370" is "Ready"
	I1115 10:28:19.306628  676403 pod_ready.go:86] duration metric: took 158.737296ms for pod "kube-controller-manager-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.506616  676403 pod_ready.go:83] waiting for pod "kube-proxy-mcjx7" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:19.906314  676403 pod_ready.go:94] pod "kube-proxy-mcjx7" is "Ready"
	I1115 10:28:19.906344  676403 pod_ready.go:86] duration metric: took 399.696357ms for pod "kube-proxy-mcjx7" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:20.106755  676403 pod_ready.go:83] waiting for pod "kube-scheduler-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:20.506909  676403 pod_ready.go:94] pod "kube-scheduler-pause-742370" is "Ready"
	I1115 10:28:20.506936  676403 pod_ready.go:86] duration metric: took 400.151951ms for pod "kube-scheduler-pause-742370" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:28:20.506950  676403 pod_ready.go:40] duration metric: took 16.413914858s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:28:20.562461  676403 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:28:20.565742  676403 out.go:179] * Done! kubectl is now configured to use "pause-742370" cluster and "default" namespace by default
	I1115 10:28:18.581688  661959 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:28:18.582066  661959 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 10:28:18.582115  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 10:28:18.582180  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 10:28:18.612297  661959 cri.go:89] found id: "d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:18.612320  661959 cri.go:89] found id: ""
	I1115 10:28:18.612329  661959 logs.go:282] 1 containers: [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af]
	I1115 10:28:18.612391  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:18.616906  661959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 10:28:18.616985  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 10:28:18.656903  661959 cri.go:89] found id: ""
	I1115 10:28:18.656930  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.656939  661959 logs.go:284] No container was found matching "etcd"
	I1115 10:28:18.656945  661959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 10:28:18.657010  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 10:28:18.700769  661959 cri.go:89] found id: ""
	I1115 10:28:18.700795  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.700804  661959 logs.go:284] No container was found matching "coredns"
	I1115 10:28:18.700811  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 10:28:18.700868  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 10:28:18.735972  661959 cri.go:89] found id: "9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:18.736007  661959 cri.go:89] found id: ""
	I1115 10:28:18.736016  661959 logs.go:282] 1 containers: [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1]
	I1115 10:28:18.736089  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:18.739903  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 10:28:18.740000  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 10:28:18.770126  661959 cri.go:89] found id: ""
	I1115 10:28:18.770192  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.770217  661959 logs.go:284] No container was found matching "kube-proxy"
	I1115 10:28:18.770232  661959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 10:28:18.770304  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 10:28:18.802946  661959 cri.go:89] found id: "035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:18.802970  661959 cri.go:89] found id: ""
	I1115 10:28:18.802978  661959 logs.go:282] 1 containers: [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e]
	I1115 10:28:18.803032  661959 ssh_runner.go:195] Run: which crictl
	I1115 10:28:18.807687  661959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 10:28:18.807748  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 10:28:18.850628  661959 cri.go:89] found id: ""
	I1115 10:28:18.850650  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.850659  661959 logs.go:284] No container was found matching "kindnet"
	I1115 10:28:18.850665  661959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 10:28:18.850726  661959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 10:28:18.882857  661959 cri.go:89] found id: ""
	I1115 10:28:18.882883  661959 logs.go:282] 0 containers: []
	W1115 10:28:18.882891  661959 logs.go:284] No container was found matching "storage-provisioner"
	I1115 10:28:18.882902  661959 logs.go:123] Gathering logs for dmesg ...
	I1115 10:28:18.882914  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 10:28:18.901127  661959 logs.go:123] Gathering logs for describe nodes ...
	I1115 10:28:18.901157  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 10:28:18.972978  661959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 10:28:18.973000  661959 logs.go:123] Gathering logs for kube-apiserver [d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af] ...
	I1115 10:28:18.973015  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a18330a9ec7fd440491a84ebcafc7112271057d35e223e0572d65b7c3793af"
	I1115 10:28:19.005828  661959 logs.go:123] Gathering logs for kube-scheduler [9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1] ...
	I1115 10:28:19.005861  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9f304f7407d84c5850e653615f77d84499edc50a31bdf46f48cbd1cbda65bcb1"
	I1115 10:28:19.067952  661959 logs.go:123] Gathering logs for kube-controller-manager [035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e] ...
	I1115 10:28:19.067989  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 035737cf57dbfd4df2da80bab2eadb2fa735f2b4849b836e6c60bcfd11452c2e"
	I1115 10:28:19.098878  661959 logs.go:123] Gathering logs for CRI-O ...
	I1115 10:28:19.098905  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 10:28:19.168404  661959 logs.go:123] Gathering logs for container status ...
	I1115 10:28:19.168444  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 10:28:19.206317  661959 logs.go:123] Gathering logs for kubelet ...
	I1115 10:28:19.206344  661959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.938648765Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.938669261Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.941729592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.941760983Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.941782743Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.944690315Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:28:07 pause-742370 crio[2248]: time="2025-11-15T10:28:07.944729157Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.131129247Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=c40765e2-fe55-4584-81f7-be735eef6254 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.132548047Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=4db44e6c-2707-4074-9722-3949db96756a name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.133505453Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-55cnz/coredns" id=86c5a479-ee73-41e4-8f30-bbcac2c5fbf7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.13366407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.141998899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.142550744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.15692819Z" level=info msg="Created container 05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f: kube-system/coredns-66bc5c9577-55cnz/coredns" id=86c5a479-ee73-41e4-8f30-bbcac2c5fbf7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.159398276Z" level=info msg="Starting container: 05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f" id=24462274-297e-4b41-bea3-6d0ef424c9d3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:28:16 pause-742370 crio[2248]: time="2025-11-15T10:28:16.161109261Z" level=info msg="Started container" PID=2800 containerID=05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f description=kube-system/coredns-66bc5c9577-55cnz/coredns id=24462274-297e-4b41-bea3-6d0ef424c9d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ace949a8f9f625561d0aabe18c374c1c203fdec1de6ffcaec59e9116ce9e5239
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.131194105Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=1adbfd46-783e-475f-9f9f-0811394aa3ee name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.13232342Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6439f208-63ce-4ec1-9c21-14fed45bf8cd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.13319643Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-pv4lm/coredns" id=77cd8584-afe6-4af7-aedf-ce4a0cd55171 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.133307516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.13901817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.139522485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.155382434Z" level=info msg="Created container 288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555: kube-system/coredns-66bc5c9577-pv4lm/coredns" id=77cd8584-afe6-4af7-aedf-ce4a0cd55171 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.156091953Z" level=info msg="Starting container: 288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555" id=33860eca-d923-4ac7-949e-3f4327a6e76d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:28:18 pause-742370 crio[2248]: time="2025-11-15T10:28:18.158673322Z" level=info msg="Started container" PID=2815 containerID=288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555 description=kube-system/coredns-66bc5c9577-pv4lm/coredns id=33860eca-d923-4ac7-949e-3f4327a6e76d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dfce00123902b453aba45be17c3f97aa9c20a67a89daa6e27a68232f0dbaa26
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	288a80c19f3a3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   8 seconds ago       Running             coredns                   2                   3dfce00123902       coredns-66bc5c9577-pv4lm               kube-system
	05efb4a261f9f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   10 seconds ago      Running             coredns                   2                   ace949a8f9f62       coredns-66bc5c9577-55cnz               kube-system
	eca10abde8316       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   28 seconds ago      Running             kindnet-cni               2                   770b17e152e33       kindnet-9xgvp                          kube-system
	c8e184bde3a71       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   28 seconds ago      Running             kube-scheduler            2                   acb7140270d9f       kube-scheduler-pause-742370            kube-system
	310bc98f84eb9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   29 seconds ago      Running             etcd                      2                   3a97e5158b975       etcd-pause-742370                      kube-system
	66f491c30963d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   29 seconds ago      Running             kube-apiserver            2                   4b4850d31d09b       kube-apiserver-pause-742370            kube-system
	f87e275e82af8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   29 seconds ago      Running             kube-controller-manager   2                   8e40c96fddb3b       kube-controller-manager-pause-742370   kube-system
	ab0f0a81ef6ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   29 seconds ago      Running             kube-proxy                2                   88c316f5bc9cc       kube-proxy-mcjx7                       kube-system
	772520002a0d2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   42 seconds ago      Exited              kube-proxy                1                   88c316f5bc9cc       kube-proxy-mcjx7                       kube-system
	894151757a420       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   42 seconds ago      Exited              kube-scheduler            1                   acb7140270d9f       kube-scheduler-pause-742370            kube-system
	6e9864194912a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   42 seconds ago      Exited              coredns                   1                   ace949a8f9f62       coredns-66bc5c9577-55cnz               kube-system
	72a49dbd30345       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   42 seconds ago      Exited              kindnet-cni               1                   770b17e152e33       kindnet-9xgvp                          kube-system
	760e182248368       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   42 seconds ago      Exited              coredns                   1                   3dfce00123902       coredns-66bc5c9577-pv4lm               kube-system
	cfb4b2c9e1313       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   42 seconds ago      Exited              kube-controller-manager   1                   8e40c96fddb3b       kube-controller-manager-pause-742370   kube-system
	dcfbe82e3ea3a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   42 seconds ago      Exited              kube-apiserver            1                   4b4850d31d09b       kube-apiserver-pause-742370            kube-system
	5677ea4b60fed       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   42 seconds ago      Exited              etcd                      1                   3a97e5158b975       etcd-pause-742370                      kube-system
	
	
	==> coredns [05efb4a261f9ffb6ade693b76ee5c8629e04fcbcd40a119a4c3668888c62703f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59118 - 63155 "HINFO IN 936987760029837079.1719399592185128055. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012876851s
	
	
	==> coredns [288a80c19f3a3eab8f2c1a1a7bff122792c4ee22de1ba512aec5c58353c8e555] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39201 - 43462 "HINFO IN 3376852173061605213.5254767342976220100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003502748s
	
	
	==> coredns [6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:36694 - 34092 "HINFO IN 2088388766527242553.822144519811294297. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.036133941s
	
	
	==> coredns [760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47103 - 49832 "HINFO IN 7731657668072876780.3955365230317501202. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012771976s
	
	
	==> describe nodes <==
	Name:               pause-742370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-742370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=pause-742370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_26_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:26:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-742370
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:28:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:26:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:26:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:26:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:28:12 +0000   Sat, 15 Nov 2025 10:27:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-742370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                e10bddd9-6d57-4527-9e98-28976bb9c4d7
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-55cnz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     94s
	  kube-system                 coredns-66bc5c9577-pv4lm                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     94s
	  kube-system                 etcd-pause-742370                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         99s
	  kube-system                 kindnet-9xgvp                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      95s
	  kube-system                 kube-apiserver-pause-742370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-742370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-mcjx7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-742370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 94s                  kube-proxy       
	  Normal   Starting                 23s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node pause-742370 status is now: NodeHasSufficientMemory
	  Normal   Starting                 107s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 107s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node pause-742370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x8 over 107s)  kubelet          Node pause-742370 status is now: NodeHasSufficientPID
	  Normal   Starting                 100s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 100s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  99s                  kubelet          Node pause-742370 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s                  kubelet          Node pause-742370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s                  kubelet          Node pause-742370 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           96s                  node-controller  Node pause-742370 event: Registered Node pause-742370 in Controller
	  Normal   NodeReady                53s                  kubelet          Node pause-742370 status is now: NodeReady
	  Warning  ContainerGCFailed        40s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           20s                  node-controller  Node pause-742370 event: Registered Node pause-742370 in Controller
	
	
	==> dmesg <==
	[ +33.622137] overlayfs: idmapped layers are currently not supported
	[Nov15 10:01] overlayfs: idmapped layers are currently not supported
	[Nov15 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.446621] overlayfs: idmapped layers are currently not supported
	[Nov15 10:03] overlayfs: idmapped layers are currently not supported
	[ +29.285636] overlayfs: idmapped layers are currently not supported
	[Nov15 10:05] overlayfs: idmapped layers are currently not supported
	[Nov15 10:09] overlayfs: idmapped layers are currently not supported
	[Nov15 10:10] overlayfs: idmapped layers are currently not supported
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [310bc98f84eb9a13685d56f25f40ae6ee2024ca6e91383fa77e34b73a5d1ccdd] <==
	{"level":"warn","ts":"2025-11-15T10:27:59.662086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.692551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.706517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.766526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.772883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.798745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.813752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.831925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.854057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.872480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.902774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.912976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.947286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.954798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:27:59.985226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.051011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.055264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.074640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.132278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.166788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.264887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.294708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.333063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.362462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:28:00.481470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	
	
	==> etcd [5677ea4b60fedf14359a3a400fb67e5d5e7b24d88130a022cc787f789cac5ddf] <==
	{"level":"info","ts":"2025-11-15T10:27:45.480200Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:27:45.500304Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T10:27:45.511796Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:27:45.511914Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-11-15T10:27:45.512721Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-15T10:27:45.512970Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-15T10:27:45.515958Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T10:27:45.743901Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T10:27:45.744014Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-742370","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-15T10:27:45.744322Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:45.748670Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T10:27:45.750456Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:45.750564Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-15T10:27:45.750796Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-15T10:27:45.750847Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751039Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751099Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:45.751143Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751215Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T10:27:45.751252Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T10:27:45.751314Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:45.761430Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-15T10:27:45.761895Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T10:27:45.761968Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:27:45.762010Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-742370","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 10:28:26 up  5:10,  0 user,  load average: 1.99, 2.62, 2.34
	Linux pause-742370 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72a49dbd30345812d15352a78f21786b46bee08f9d96da2318bdfc3460699228] <==
	I1115 10:27:44.771547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:27:44.798658       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:27:44.798820       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:27:44.798834       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:27:44.798846       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:27:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:27:44.993586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:27:44.993695       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:27:44.993730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:27:44.998743       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kindnet [eca10abde8316621413e98548b270d2b5740f5e6c6f2e387403a71e8813355f1] <==
	I1115 10:27:57.726446       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:27:57.726664       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:27:57.726818       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:27:57.726830       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:27:57.726840       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:27:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:27:58.015225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:27:58.015331       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:27:58.015412       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:27:58.017315       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:28:02.016509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:28:02.016678       1 metrics.go:72] Registering metrics
	I1115 10:28:02.016801       1 controller.go:711] "Syncing nftables rules"
	I1115 10:28:07.931335       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:28:07.931395       1 main.go:301] handling current node
	I1115 10:28:17.927584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:28:17.927618       1 main.go:301] handling current node
	
	
	==> kube-apiserver [66f491c30963d573277c592e9b2e25156e8a6d81a7cc25ce3ca261987f5ebf0e] <==
	I1115 10:28:01.863789       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:28:01.866343       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:28:01.866477       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:28:01.866564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:28:01.881832       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:28:01.886156       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:28:01.886229       1 policy_source.go:240] refreshing policies
	I1115 10:28:01.896145       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:28:01.915199       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:28:01.915400       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:28:01.915516       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:28:01.915830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:28:01.915961       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:28:01.916291       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:28:01.966561       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:28:01.968593       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:28:01.983223       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:28:02.009091       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:28:02.010537       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:28:02.510712       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:28:04.667212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:28:06.211269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:28:06.309922       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:28:06.408852       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:28:06.459369       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [dcfbe82e3ea3a44db46570ac44e74c82ad7b305d8adb4a6b172d8ecfc030a792] <==
	I1115 10:27:44.610575       1 options.go:263] external host was not specified, using 192.168.85.2
	I1115 10:27:44.613025       1 server.go:150] Version: v1.34.1
	I1115 10:27:44.613058       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [cfb4b2c9e131375c52820201cc2aa616b3aafdb5d46b65406a8b2f3f47825184] <==
	
	
	==> kube-controller-manager [f87e275e82af871d828ba41fd96a95ab60f2bc5453c125ca1e11b2196c628dfa] <==
	I1115 10:28:06.084591       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:28:06.089244       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:28:06.089366       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:28:06.089471       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:28:06.089521       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:28:06.089549       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:28:06.089577       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:28:06.097175       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:28:06.102594       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:28:06.102722       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:28:06.102770       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:28:06.102862       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:28:06.102905       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:28:06.103228       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:28:06.103277       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:28:06.103290       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:28:06.113294       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:28:06.134105       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:28:06.134193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:28:06.135293       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:28:06.135361       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:28:06.178770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:28:06.221225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:28:06.221339       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:28:06.221383       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [772520002a0d2e732b637bacc7f3b9a571c747d1a191d27acbd9c780593f38eb] <==
	
	
	==> kube-proxy [ab0f0a81ef6eded6ffef0d3661cdcd2c942b9b2627c3c49c8cdd8a50142ef602] <==
	I1115 10:27:59.958830       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:28:01.354078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:28:02.589680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:28:02.589718       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:28:02.589784       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:28:03.659872       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:28:03.659997       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:28:03.701077       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:28:03.702771       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:28:03.703512       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:28:03.704942       1 config.go:200] "Starting service config controller"
	I1115 10:28:03.705014       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:28:03.705106       1 config.go:309] "Starting node config controller"
	I1115 10:28:03.705149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:28:03.705182       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:28:03.705212       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:28:03.709064       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:28:03.705225       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:28:03.709162       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:28:03.805943       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:28:03.810172       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:28:03.810265       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [894151757a42063aa61216c10db7008aa192a0016f7f781aa5e705f8b3c03186] <==
	
	
	==> kube-scheduler [c8e184bde3a719ef981434aa1018993df941ba79e4a11831f1d812e69b3afee4] <==
	I1115 10:28:00.811305       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:28:03.811170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:28:03.811198       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:28:03.816044       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:28:03.816196       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:28:03.816154       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:28:03.816247       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:28:03.816176       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:28:03.816504       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:28:03.819697       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:28:03.819749       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:28:03.916371       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:28:03.916516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:28:03.916899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:27:57 pause-742370 kubelet[1322]: E1115 10:27:57.543739    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-742370\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0c3dec5f461c73a098dc2e296a8d5a0b" pod="kube-system/kube-apiserver-pause-742370"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: E1115 10:27:57.543992    1322 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-55cnz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="23e8d4ed-4a7c-4411-a5ab-ecc48346820e" pod="kube-system/coredns-66bc5c9577-55cnz"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.573520    1322 scope.go:117] "RemoveContainer" containerID="ea4fb980a515e467e33d46395276a789a4ece55251fb99babf679dbbab5e61ca"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.641297    1322 scope.go:117] "RemoveContainer" containerID="4602e2b05d93357c275e91842c3b8c26bcc12dff12d5c91834779995adcb7294"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.670637    1322 scope.go:117] "RemoveContainer" containerID="f813afbe6ccd1ed2d7cea245b742dcfab91a1cdfaedbb9e40e4563ae9760c9f6"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.718766    1322 scope.go:117] "RemoveContainer" containerID="ebfe8165e01e1232a7622dd6f871d26bdca77a08246dd8c8c9e76236d35743e0"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.736626    1322 scope.go:117] "RemoveContainer" containerID="534661e04c63930425a4633a1e9e9ed45d5dfbe868b098444d1060b9a020af8f"
	Nov 15 10:27:57 pause-742370 kubelet[1322]: I1115 10:27:57.751595    1322 scope.go:117] "RemoveContainer" containerID="d3edb77378f704636b6b906c0602b5fbbc83ab4dea8ada6a5cd4185864948c6c"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: I1115 10:27:58.557425    1322 scope.go:117] "RemoveContainer" containerID="760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: E1115 10:27:58.558125    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-pv4lm_kube-system(0743264a-d168-4c5f-ae23-90f5be7daea5)\"" pod="kube-system/coredns-66bc5c9577-pv4lm" podUID="0743264a-d168-4c5f-ae23-90f5be7daea5"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: I1115 10:27:58.561897    1322 scope.go:117] "RemoveContainer" containerID="6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	Nov 15 10:27:58 pause-742370 kubelet[1322]: E1115 10:27:58.562227    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-55cnz_kube-system(23e8d4ed-4a7c-4411-a5ab-ecc48346820e)\"" pod="kube-system/coredns-66bc5c9577-55cnz" podUID="23e8d4ed-4a7c-4411-a5ab-ecc48346820e"
	Nov 15 10:28:01 pause-742370 kubelet[1322]: E1115 10:28:01.886431    1322 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-742370\" is forbidden: User \"system:node:pause-742370\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-742370' and this object" podUID="f1d79d64c588692e8fc9a659e962a96e" pod="kube-system/etcd-pause-742370"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: I1115 10:28:03.541871    1322 scope.go:117] "RemoveContainer" containerID="6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: E1115 10:28:03.542539    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-55cnz_kube-system(23e8d4ed-4a7c-4411-a5ab-ecc48346820e)\"" pod="kube-system/coredns-66bc5c9577-55cnz" podUID="23e8d4ed-4a7c-4411-a5ab-ecc48346820e"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: I1115 10:28:03.546114    1322 scope.go:117] "RemoveContainer" containerID="760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	Nov 15 10:28:03 pause-742370 kubelet[1322]: E1115 10:28:03.546405    1322 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-pv4lm_kube-system(0743264a-d168-4c5f-ae23-90f5be7daea5)\"" pod="kube-system/coredns-66bc5c9577-pv4lm" podUID="0743264a-d168-4c5f-ae23-90f5be7daea5"
	Nov 15 10:28:07 pause-742370 kubelet[1322]: W1115 10:28:07.348860    1322 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 10:28:16 pause-742370 kubelet[1322]: I1115 10:28:16.130644    1322 scope.go:117] "RemoveContainer" containerID="6e9864194912a3a6844bc082342ac7b73a3fbe9c515c37e6846b822e4f19431e"
	Nov 15 10:28:17 pause-742370 kubelet[1322]: W1115 10:28:17.363149    1322 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 15 10:28:18 pause-742370 kubelet[1322]: I1115 10:28:18.130668    1322 scope.go:117] "RemoveContainer" containerID="760e182248368dc066e2ab56657d66e377d41745c331aba659b5858672c171dc"
	Nov 15 10:28:20 pause-742370 kubelet[1322]: I1115 10:28:20.518025    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-55cnz" podStartSLOduration=88.518009188 podStartE2EDuration="1m28.518009188s" podCreationTimestamp="2025-11-15 10:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:27:34.365720858 +0000 UTC m=+47.579682415" watchObservedRunningTime="2025-11-15 10:28:20.518009188 +0000 UTC m=+93.731970729"
	Nov 15 10:28:21 pause-742370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:28:21 pause-742370 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:28:21 pause-742370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
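The tail of the kubelet log above shows systemd stopping kubelet.service as the pause takes effect. A minimal way to confirm the unit state by hand, assuming the pause-742370 node container is still reachable over minikube ssh (a hypothetical follow-up check, not part of the test):

	# Expect the unit to report inactive/dead once the pause has stopped kubelet.
	out/minikube-linux-arm64 -p pause-742370 ssh -- sudo systemctl status kubelet --no-pager
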
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-742370 -n pause-742370
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-742370 -n pause-742370: exit status 2 (350.615485ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-742370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.28s)
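The two post-mortem probes above can be replayed by hand against the same profile; a minimal sketch, assuming the pause-742370 cluster still exists and the minikube binary path matches this report:

	# Query only the API server field of the profile status (Go template, as in the helper above).
	# In the run logged here this printed "Running" but exited with status 2.
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p pause-742370 -n pause-742370

	# List any pods not in phase Running across all namespaces (empty output means none).
	kubectl --context pause-742370 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
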

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.916915ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:31:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-448285 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-448285 describe deploy/metrics-server -n kube-system: exit status 1 (79.75897ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-448285 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
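The exit status 11 above comes from minikube's "is the cluster paused" pre-check rather than from the addon itself: per the stderr, it runs `sudo runc list -f json` on the node, which fails because /run/runc is missing on this CRI-O node, and the image assertion that follows then fails simply because no metrics-server deployment was ever created. A minimal sketch for reproducing the paused check by hand, assuming the old-k8s-version-448285 container is still up:

	# The exact command minikube reports as failing in MK_ADDON_ENABLE_PAUSED:
	out/minikube-linux-arm64 -p old-k8s-version-448285 ssh -- sudo runc list -f json

	# Confirm whether the runc state directory the check expects actually exists:
	out/minikube-linux-arm64 -p old-k8s-version-448285 ssh -- ls -la /run/runc
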
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-448285
helpers_test.go:243: (dbg) docker inspect old-k8s-version-448285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a",
	        "Created": "2025-11-15T10:30:52.988114549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 694261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:30:53.057361233Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/hosts",
	        "LogPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a-json.log",
	        "Name": "/old-k8s-version-448285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-448285:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-448285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a",
	                "LowerDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-448285",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-448285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-448285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-448285",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-448285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "221e07893a5f095b9d0373089cab2e2597acf37e57b35f35e44f63c64b8a38ed",
	            "SandboxKey": "/var/run/docker/netns/221e07893a5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33781"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-448285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:54:ba:d6:b8:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12ca3f9e094ca466512267d8c860ff57126abdec6db67bd16d9375e8738c15d5",
	                    "EndpointID": "89b9417c9972020c167c5eb00c4d1c8683281876cd3cafa89dc86c7643f0ccbf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-448285",
	                        "8d49869cd1fd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-448285 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-448285 logs -n 25: (1.176581314s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-864099 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo containerd config dump                                                                                                                                                                                                  │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo crio config                                                                                                                                                                                                             │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-864099                                                                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-683299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p kubernetes-upgrade-480353                                                                                                                                                                                                                  │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-683299                                                                                                                                                                                                                   │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-115480 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:30:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:30:47.141263  693873 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:30:47.141481  693873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:30:47.141509  693873 out.go:374] Setting ErrFile to fd 2...
	I1115 10:30:47.141528  693873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:30:47.141847  693873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:30:47.142315  693873 out.go:368] Setting JSON to false
	I1115 10:30:47.143332  693873 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18799,"bootTime":1763183849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:30:47.143422  693873 start.go:143] virtualization:  
	I1115 10:30:47.149433  693873 out.go:179] * [old-k8s-version-448285] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:30:47.152988  693873 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:30:47.153145  693873 notify.go:221] Checking for updates...
	I1115 10:30:47.159682  693873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:30:47.162917  693873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:30:47.166675  693873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:30:47.169778  693873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:30:47.172877  693873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:30:47.176472  693873 config.go:182] Loaded profile config "cert-expiration-845026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:30:47.176621  693873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:30:47.203270  693873 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:30:47.203452  693873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:30:47.260105  693873 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:30:47.250716231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:30:47.260217  693873 docker.go:319] overlay module found
	I1115 10:30:47.263301  693873 out.go:179] * Using the docker driver based on user configuration
	I1115 10:30:47.266295  693873 start.go:309] selected driver: docker
	I1115 10:30:47.266313  693873 start.go:930] validating driver "docker" against <nil>
	I1115 10:30:47.266327  693873 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:30:47.267103  693873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:30:47.319815  693873 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:30:47.310810872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:30:47.319970  693873 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:30:47.320215  693873 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:30:47.323274  693873 out.go:179] * Using Docker driver with root privileges
	I1115 10:30:47.326184  693873 cni.go:84] Creating CNI manager for ""
	I1115 10:30:47.326251  693873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:30:47.326264  693873 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:30:47.326347  693873 start.go:353] cluster config:
	{Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:30:47.329572  693873 out.go:179] * Starting "old-k8s-version-448285" primary control-plane node in "old-k8s-version-448285" cluster
	I1115 10:30:47.332433  693873 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:30:47.335380  693873 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:30:47.338199  693873 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:30:47.338246  693873 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 10:30:47.338258  693873 cache.go:65] Caching tarball of preloaded images
	I1115 10:30:47.338361  693873 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:30:47.338376  693873 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 10:30:47.338483  693873 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json ...
	I1115 10:30:47.338507  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json: {Name:mk9821e4d6a09e9e4c39f96f66c376cecae1d5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:30:47.338674  693873 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:30:47.359002  693873 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:30:47.359024  693873 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:30:47.359042  693873 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:30:47.359064  693873 start.go:360] acquireMachinesLock for old-k8s-version-448285: {Name:mk5fdf42c0c76187fa0952dcaa2e938d4fb739c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:30:47.359177  693873 start.go:364] duration metric: took 95.842µs to acquireMachinesLock for "old-k8s-version-448285"
	I1115 10:30:47.359208  693873 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:30:47.359287  693873 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:30:47.362766  693873 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:30:47.362984  693873 start.go:159] libmachine.API.Create for "old-k8s-version-448285" (driver="docker")
	I1115 10:30:47.363014  693873 client.go:173] LocalClient.Create starting
	I1115 10:30:47.363088  693873 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:30:47.363129  693873 main.go:143] libmachine: Decoding PEM data...
	I1115 10:30:47.363148  693873 main.go:143] libmachine: Parsing certificate...
	I1115 10:30:47.363206  693873 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:30:47.363227  693873 main.go:143] libmachine: Decoding PEM data...
	I1115 10:30:47.363240  693873 main.go:143] libmachine: Parsing certificate...
	I1115 10:30:47.363593  693873 cli_runner.go:164] Run: docker network inspect old-k8s-version-448285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:30:47.379423  693873 cli_runner.go:211] docker network inspect old-k8s-version-448285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:30:47.379525  693873 network_create.go:284] running [docker network inspect old-k8s-version-448285] to gather additional debugging logs...
	I1115 10:30:47.379548  693873 cli_runner.go:164] Run: docker network inspect old-k8s-version-448285
	W1115 10:30:47.397052  693873 cli_runner.go:211] docker network inspect old-k8s-version-448285 returned with exit code 1
	I1115 10:30:47.397080  693873 network_create.go:287] error running [docker network inspect old-k8s-version-448285]: docker network inspect old-k8s-version-448285: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-448285 not found
	I1115 10:30:47.397093  693873 network_create.go:289] output of [docker network inspect old-k8s-version-448285]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-448285 not found
	
	** /stderr **
	I1115 10:30:47.397207  693873 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:30:47.413686  693873 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:30:47.414027  693873 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:30:47.414376  693873 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:30:47.414564  693873 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4166d0b0086f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:02:72:79:4e:6d} reservation:<nil>}
	I1115 10:30:47.415024  693873 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b9910}
	I1115 10:30:47.415050  693873 network_create.go:124] attempt to create docker network old-k8s-version-448285 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:30:47.415114  693873 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-448285 old-k8s-version-448285
	I1115 10:30:47.480985  693873 network_create.go:108] docker network old-k8s-version-448285 192.168.85.0/24 created
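
The subnet scan above walks candidate /24 ranges (192.168.49.0, .58, .67, .76) and settles on the first one with no existing docker bridge, 192.168.85.0/24. Below is a minimal Go sketch of that idea; it is an illustration only, not minikube's network_create implementation, and the hard-coded candidate list plus the use of host interface addresses to detect taken ranges are assumptions.

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any host interface address (for example a docker
// bridge gateway such as 192.168.49.1) already falls inside the candidate range.
func subnetTaken(cidr string, addrs []net.Addr) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true // treat an unparsable candidate as unusable
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Same candidate order that appears in the log: .49, .58, .67, .76, .85, ...
	candidates := []string{
		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
		"192.168.76.0/24", "192.168.85.0/24", "192.168.94.0/24",
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		panic(err)
	}
	for _, c := range candidates {
		if subnetTaken(c, addrs) {
			fmt.Println("skipping subnet", c, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", c)
		break
	}
}
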
	I1115 10:30:47.481018  693873 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-448285" container
	I1115 10:30:47.481118  693873 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:30:47.498158  693873 cli_runner.go:164] Run: docker volume create old-k8s-version-448285 --label name.minikube.sigs.k8s.io=old-k8s-version-448285 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:30:47.515588  693873 oci.go:103] Successfully created a docker volume old-k8s-version-448285
	I1115 10:30:47.515693  693873 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-448285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-448285 --entrypoint /usr/bin/test -v old-k8s-version-448285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:30:48.007043  693873 oci.go:107] Successfully prepared a docker volume old-k8s-version-448285
	I1115 10:30:48.007126  693873 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:30:48.007140  693873 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:30:48.007225  693873 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-448285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:30:52.915657  693873 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-448285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.90838957s)
	I1115 10:30:52.915705  693873 kic.go:203] duration metric: took 4.908560733s to extract preloaded images to volume ...
	W1115 10:30:52.915840  693873 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:30:52.915961  693873 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:30:52.973486  693873 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-448285 --name old-k8s-version-448285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-448285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-448285 --network old-k8s-version-448285 --ip 192.168.85.2 --volume old-k8s-version-448285:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:30:53.283657  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Running}}
	I1115 10:30:53.303977  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:30:53.327684  693873 cli_runner.go:164] Run: docker exec old-k8s-version-448285 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:30:53.377807  693873 oci.go:144] the created container "old-k8s-version-448285" has a running status.
	I1115 10:30:53.377837  693873 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa...
	I1115 10:30:54.168886  693873 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:30:54.191815  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:30:54.209940  693873 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:30:54.209965  693873 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-448285 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:30:54.251135  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:30:54.268826  693873 machine.go:94] provisionDockerMachine start ...
	I1115 10:30:54.268938  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:54.285846  693873 main.go:143] libmachine: Using SSH client type: native
	I1115 10:30:54.286187  693873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33779 <nil> <nil>}
	I1115 10:30:54.286203  693873 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:30:54.286821  693873 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45818->127.0.0.1:33779: read: connection reset by peer
	I1115 10:30:57.437170  693873 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448285
	
	I1115 10:30:57.437196  693873 ubuntu.go:182] provisioning hostname "old-k8s-version-448285"
	I1115 10:30:57.437268  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:57.455119  693873 main.go:143] libmachine: Using SSH client type: native
	I1115 10:30:57.455439  693873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33779 <nil> <nil>}
	I1115 10:30:57.455457  693873 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448285 && echo "old-k8s-version-448285" | sudo tee /etc/hostname
	I1115 10:30:57.616618  693873 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448285
	
	I1115 10:30:57.616706  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:57.635153  693873 main.go:143] libmachine: Using SSH client type: native
	I1115 10:30:57.635474  693873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33779 <nil> <nil>}
	I1115 10:30:57.635497  693873 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448285/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:30:57.785784  693873 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:30:57.785813  693873 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:30:57.785841  693873 ubuntu.go:190] setting up certificates
	I1115 10:30:57.785851  693873 provision.go:84] configureAuth start
	I1115 10:30:57.785911  693873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:30:57.803196  693873 provision.go:143] copyHostCerts
	I1115 10:30:57.803276  693873 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:30:57.803289  693873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:30:57.803365  693873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:30:57.803467  693873 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:30:57.803480  693873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:30:57.803511  693873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:30:57.803569  693873 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:30:57.803612  693873 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:30:57.803646  693873 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:30:57.803712  693873 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448285 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-448285]
	I1115 10:30:58.038210  693873 provision.go:177] copyRemoteCerts
	I1115 10:30:58.038278  693873 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:30:58.038334  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:58.071636  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:30:58.177087  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:30:58.194328  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:30:58.211713  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:30:58.228977  693873 provision.go:87] duration metric: took 443.112782ms to configureAuth
	I1115 10:30:58.229002  693873 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:30:58.229183  693873 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:30:58.229281  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:58.246555  693873 main.go:143] libmachine: Using SSH client type: native
	I1115 10:30:58.246862  693873 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33779 <nil> <nil>}
	I1115 10:30:58.246883  693873 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:30:58.502684  693873 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:30:58.502711  693873 machine.go:97] duration metric: took 4.233862559s to provisionDockerMachine
	I1115 10:30:58.502722  693873 client.go:176] duration metric: took 11.139695453s to LocalClient.Create
	I1115 10:30:58.502735  693873 start.go:167] duration metric: took 11.139752206s to libmachine.API.Create "old-k8s-version-448285"
	I1115 10:30:58.502742  693873 start.go:293] postStartSetup for "old-k8s-version-448285" (driver="docker")
	I1115 10:30:58.502752  693873 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:30:58.502837  693873 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:30:58.502883  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:58.519182  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:30:58.625442  693873 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:30:58.628800  693873 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:30:58.628836  693873 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:30:58.628847  693873 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:30:58.628899  693873 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:30:58.628993  693873 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:30:58.629116  693873 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:30:58.636489  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:30:58.655452  693873 start.go:296] duration metric: took 152.694025ms for postStartSetup
	I1115 10:30:58.655822  693873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:30:58.673313  693873 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json ...
	I1115 10:30:58.673579  693873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:30:58.673668  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:58.690050  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:30:58.790523  693873 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:30:58.795926  693873 start.go:128] duration metric: took 11.436622584s to createHost
	I1115 10:30:58.795958  693873 start.go:83] releasing machines lock for "old-k8s-version-448285", held for 11.43676882s
	I1115 10:30:58.796065  693873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:30:58.812721  693873 ssh_runner.go:195] Run: cat /version.json
	I1115 10:30:58.812787  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:58.813045  693873 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:30:58.813106  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:30:58.834099  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:30:58.847311  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:30:58.949822  693873 ssh_runner.go:195] Run: systemctl --version
	I1115 10:30:59.045247  693873 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:30:59.081306  693873 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:30:59.086213  693873 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:30:59.086285  693873 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:30:59.114619  693873 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:30:59.114644  693873 start.go:496] detecting cgroup driver to use...
	I1115 10:30:59.114677  693873 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:30:59.114726  693873 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:30:59.132800  693873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:30:59.145681  693873 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:30:59.145775  693873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:30:59.166573  693873 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:30:59.185473  693873 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:30:59.311882  693873 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:30:59.436602  693873 docker.go:234] disabling docker service ...
	I1115 10:30:59.436712  693873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:30:59.460576  693873 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:30:59.474369  693873 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:30:59.605959  693873 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:30:59.734413  693873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:30:59.748091  693873 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:30:59.763263  693873 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 10:30:59.763386  693873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.775954  693873 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:30:59.776026  693873 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.789319  693873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.798597  693873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.808074  693873 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:30:59.816831  693873 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.826504  693873 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.841984  693873 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:30:59.850840  693873 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:30:59.859203  693873 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:30:59.867285  693873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:30:59.989185  693873 ssh_runner.go:195] Run: sudo systemctl restart crio
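
The sequence of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as its pause image and cgroupfs as its cgroup manager, then restarts crio. As a rough sketch only (a hypothetical helper, not minikube code), the two key substitutions could be expressed in Go like this:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path taken from the log above

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Mirror the two sed edits: force the pause image and the cgroup manager.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", path, "- restart crio to apply the change")
}
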
	I1115 10:31:00.388166  693873 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:31:00.388292  693873 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:31:00.393286  693873 start.go:564] Will wait 60s for crictl version
	I1115 10:31:00.393401  693873 ssh_runner.go:195] Run: which crictl
	I1115 10:31:00.399323  693873 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:31:00.428234  693873 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:31:00.428373  693873 ssh_runner.go:195] Run: crio --version
	I1115 10:31:00.462260  693873 ssh_runner.go:195] Run: crio --version
	I1115 10:31:00.500738  693873 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 10:31:00.503836  693873 cli_runner.go:164] Run: docker network inspect old-k8s-version-448285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:31:00.523085  693873 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:31:00.527597  693873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:31:00.538693  693873 kubeadm.go:884] updating cluster {Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:31:00.538817  693873 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:31:00.538885  693873 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:31:00.575933  693873 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:31:00.575958  693873 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:31:00.576014  693873 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:31:00.617920  693873 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:31:00.617946  693873 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:31:00.617955  693873 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 10:31:00.618042  693873 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-448285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:31:00.618144  693873 ssh_runner.go:195] Run: crio config
	I1115 10:31:00.674523  693873 cni.go:84] Creating CNI manager for ""
	I1115 10:31:00.674551  693873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:31:00.674573  693873 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:31:00.674601  693873 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448285 NodeName:old-k8s-version-448285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:31:00.674743  693873 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-448285"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:31:00.675089  693873 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 10:31:00.687907  693873 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:31:00.688004  693873 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:31:00.695583  693873 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1115 10:31:00.709687  693873 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:31:00.722945  693873 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
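
The kubeadm.yaml.new written above (2160 bytes) bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration as separate YAML documents split by "---". The following is a minimal sketch of reading such a file back, assuming the gopkg.in/yaml.v3 package and the path shown in the log; the helper itself is hypothetical and not part of minikube:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document carries its own apiVersion and kind.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("failSwapOn:", doc["failSwapOn"])
		}
	}
}
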
	I1115 10:31:00.736498  693873 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:31:00.740053  693873 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:31:00.749615  693873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:31:00.863499  693873 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:31:00.879728  693873 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285 for IP: 192.168.85.2
	I1115 10:31:00.879759  693873 certs.go:195] generating shared ca certs ...
	I1115 10:31:00.879778  693873 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:00.879962  693873 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:31:00.880021  693873 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:31:00.880043  693873 certs.go:257] generating profile certs ...
	I1115 10:31:00.880130  693873 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.key
	I1115 10:31:00.880147  693873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt with IP's: []
	I1115 10:31:01.197630  693873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt ...
	I1115 10:31:01.197659  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: {Name:mk0b0c097e35e5a710df8a96d410ce415a09cd44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:01.197863  693873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.key ...
	I1115 10:31:01.197879  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.key: {Name:mk8ea793cfb499701f80e51f56761b9d14252b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:01.197980  693873 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key.28437dd2
	I1115 10:31:01.198001  693873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt.28437dd2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:31:02.027836  693873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt.28437dd2 ...
	I1115 10:31:02.027869  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt.28437dd2: {Name:mkacb38407f087540e9708e68334378cf3e4f434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:02.028117  693873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key.28437dd2 ...
	I1115 10:31:02.028142  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key.28437dd2: {Name:mkdc310292cf2e585455b9d61d324251b5fa3ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:02.028232  693873 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt.28437dd2 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt
	I1115 10:31:02.028325  693873 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key.28437dd2 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key
	I1115 10:31:02.028394  693873 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key
	I1115 10:31:02.028412  693873 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.crt with IP's: []
	I1115 10:31:02.110291  693873 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.crt ...
	I1115 10:31:02.110319  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.crt: {Name:mk1469e3d698934215eb59f1f5c80682d2ede53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:02.110511  693873 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key ...
	I1115 10:31:02.110525  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key: {Name:mkd3c88f1a4bfecea580c95abcee4afb977c27b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:02.110718  693873 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:31:02.110762  693873 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:31:02.110775  693873 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:31:02.110802  693873 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:31:02.110829  693873 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:31:02.110853  693873 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:31:02.110897  693873 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:31:02.111461  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:31:02.131644  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:31:02.153578  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:31:02.171809  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:31:02.190420  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 10:31:02.211634  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:31:02.232533  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:31:02.251581  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:31:02.269761  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:31:02.288369  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:31:02.306449  693873 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:31:02.325282  693873 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:31:02.339946  693873 ssh_runner.go:195] Run: openssl version
	I1115 10:31:02.347996  693873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:31:02.359455  693873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:31:02.363812  693873 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:31:02.363892  693873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:31:02.407135  693873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:31:02.415664  693873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:31:02.423976  693873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:31:02.428041  693873 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:31:02.428139  693873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:31:02.469572  693873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:31:02.478469  693873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:31:02.486875  693873 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:31:02.490956  693873 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:31:02.491048  693873 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:31:02.535492  693873 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:31:02.544311  693873 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:31:02.547656  693873 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:31:02.547725  693873 kubeadm.go:401] StartCluster: {Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:31:02.547817  693873 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:31:02.547890  693873 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:31:02.578189  693873 cri.go:89] found id: ""
	I1115 10:31:02.578305  693873 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:31:02.587168  693873 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:31:02.595463  693873 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:31:02.595586  693873 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:31:02.604163  693873 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:31:02.604184  693873 kubeadm.go:158] found existing configuration files:
	
	I1115 10:31:02.604249  693873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:31:02.612817  693873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:31:02.612926  693873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:31:02.620544  693873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:31:02.628303  693873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:31:02.628409  693873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:31:02.635805  693873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:31:02.643531  693873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:31:02.643621  693873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:31:02.651072  693873 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:31:02.659489  693873 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:31:02.659585  693873 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:31:02.666952  693873 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:31:02.756758  693873 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:31:02.838907  693873 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:31:19.191852  693873 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 10:31:19.191921  693873 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:31:19.192019  693873 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:31:19.192116  693873 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:31:19.192158  693873 kubeadm.go:319] OS: Linux
	I1115 10:31:19.192209  693873 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:31:19.192270  693873 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:31:19.192327  693873 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:31:19.192382  693873 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:31:19.192433  693873 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:31:19.192482  693873 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:31:19.192536  693873 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:31:19.192591  693873 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:31:19.192640  693873 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:31:19.192722  693873 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:31:19.192821  693873 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:31:19.192934  693873 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 10:31:19.193000  693873 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:31:19.195912  693873 out.go:252]   - Generating certificates and keys ...
	I1115 10:31:19.196020  693873 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:31:19.196122  693873 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:31:19.196209  693873 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:31:19.196274  693873 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:31:19.196334  693873 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:31:19.196396  693873 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:31:19.196455  693873 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:31:19.196594  693873 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-448285] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:31:19.196646  693873 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:31:19.196770  693873 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-448285] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:31:19.196835  693873 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:31:19.196902  693873 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:31:19.196946  693873 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:31:19.197006  693873 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:31:19.197056  693873 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:31:19.197108  693873 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:31:19.197182  693873 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:31:19.197240  693873 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:31:19.197328  693873 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:31:19.197396  693873 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:31:19.200447  693873 out.go:252]   - Booting up control plane ...
	I1115 10:31:19.200551  693873 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:31:19.200646  693873 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:31:19.200716  693873 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:31:19.200842  693873 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:31:19.200942  693873 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:31:19.200983  693873 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:31:19.201167  693873 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1115 10:31:19.201247  693873 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.004725 seconds
	I1115 10:31:19.201375  693873 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:31:19.201526  693873 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:31:19.201626  693873 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:31:19.201845  693873 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-448285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:31:19.201906  693873 kubeadm.go:319] [bootstrap-token] Using token: cn83w2.r1nnpruqmu8v97ce
	I1115 10:31:19.204846  693873 out.go:252]   - Configuring RBAC rules ...
	I1115 10:31:19.205072  693873 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:31:19.205175  693873 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:31:19.205340  693873 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:31:19.205483  693873 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:31:19.205650  693873 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:31:19.205754  693873 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:31:19.205877  693873 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:31:19.205922  693873 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:31:19.205971  693873 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:31:19.205975  693873 kubeadm.go:319] 
	I1115 10:31:19.206049  693873 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:31:19.206053  693873 kubeadm.go:319] 
	I1115 10:31:19.206142  693873 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:31:19.206147  693873 kubeadm.go:319] 
	I1115 10:31:19.206174  693873 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:31:19.206247  693873 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:31:19.206301  693873 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:31:19.206306  693873 kubeadm.go:319] 
	I1115 10:31:19.206370  693873 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:31:19.206375  693873 kubeadm.go:319] 
	I1115 10:31:19.206432  693873 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:31:19.206437  693873 kubeadm.go:319] 
	I1115 10:31:19.206498  693873 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:31:19.206579  693873 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:31:19.206652  693873 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:31:19.206657  693873 kubeadm.go:319] 
	I1115 10:31:19.206755  693873 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:31:19.206843  693873 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:31:19.206848  693873 kubeadm.go:319] 
	I1115 10:31:19.206944  693873 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cn83w2.r1nnpruqmu8v97ce \
	I1115 10:31:19.207061  693873 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:31:19.207083  693873 kubeadm.go:319] 	--control-plane 
	I1115 10:31:19.207088  693873 kubeadm.go:319] 
	I1115 10:31:19.207186  693873 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:31:19.207190  693873 kubeadm.go:319] 
	I1115 10:31:19.207287  693873 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cn83w2.r1nnpruqmu8v97ce \
	I1115 10:31:19.207416  693873 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:31:19.207424  693873 cni.go:84] Creating CNI manager for ""
	I1115 10:31:19.207432  693873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:31:19.210675  693873 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:31:19.213656  693873 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:31:19.221738  693873 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1115 10:31:19.221823  693873 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:31:19.241788  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:31:20.127208  693873 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:31:20.127359  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:20.127430  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-448285 minikube.k8s.io/updated_at=2025_11_15T10_31_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=old-k8s-version-448285 minikube.k8s.io/primary=true
	I1115 10:31:20.139236  693873 ops.go:34] apiserver oom_adj: -16
	I1115 10:31:20.316703  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:20.817774  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:21.317135  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:21.817364  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:22.317239  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:22.816781  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:23.317497  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:23.816787  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:24.317230  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:24.817375  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:25.317681  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:25.817065  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:26.317529  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:26.817680  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:27.317636  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:27.816816  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:28.317092  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:28.816857  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:29.316819  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:29.817692  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:30.317515  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:30.817282  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:31.317374  693873 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:31:31.430037  693873 kubeadm.go:1114] duration metric: took 11.302744599s to wait for elevateKubeSystemPrivileges
	I1115 10:31:31.430067  693873 kubeadm.go:403] duration metric: took 28.882366167s to StartCluster
	I1115 10:31:31.430085  693873 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:31.430146  693873 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:31:31.431189  693873 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:31:31.432184  693873 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:31:31.432195  693873 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:31:31.432482  693873 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:31:31.432519  693873 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:31:31.432575  693873 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-448285"
	I1115 10:31:31.432589  693873 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-448285"
	I1115 10:31:31.432611  693873 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:31:31.433100  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:31:31.433668  693873 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-448285"
	I1115 10:31:31.433689  693873 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448285"
	I1115 10:31:31.433964  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:31:31.438975  693873 out.go:179] * Verifying Kubernetes components...
	I1115 10:31:31.441871  693873 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:31:31.463084  693873 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:31:31.469908  693873 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:31:31.469939  693873 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:31:31.470014  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:31:31.482751  693873 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-448285"
	I1115 10:31:31.482793  693873 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:31:31.483204  693873 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:31:31.510041  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:31:31.522264  693873 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:31:31.522285  693873 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:31:31.522346  693873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:31:31.551624  693873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33779 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:31:31.882816  693873 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:31:31.882971  693873 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:31:31.928129  693873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:31:31.932241  693873 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:31:32.801219  693873 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448285" to be "Ready" ...
	I1115 10:31:32.801626  693873 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 10:31:33.128371  693873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.200207924s)
	I1115 10:31:33.128476  693873 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196214311s)
	I1115 10:31:33.142472  693873 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:31:33.145356  693873 addons.go:515] duration metric: took 1.712811896s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:31:33.307558  693873 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-448285" context rescaled to 1 replicas
	W1115 10:31:34.804666  693873 node_ready.go:57] node "old-k8s-version-448285" has "Ready":"False" status (will retry)
	W1115 10:31:37.305016  693873 node_ready.go:57] node "old-k8s-version-448285" has "Ready":"False" status (will retry)
	W1115 10:31:39.305086  693873 node_ready.go:57] node "old-k8s-version-448285" has "Ready":"False" status (will retry)
	W1115 10:31:41.805283  693873 node_ready.go:57] node "old-k8s-version-448285" has "Ready":"False" status (will retry)
	W1115 10:31:44.304949  693873 node_ready.go:57] node "old-k8s-version-448285" has "Ready":"False" status (will retry)
	I1115 10:31:45.807409  693873 node_ready.go:49] node "old-k8s-version-448285" is "Ready"
	I1115 10:31:45.807441  693873 node_ready.go:38] duration metric: took 13.006145294s for node "old-k8s-version-448285" to be "Ready" ...
	I1115 10:31:45.807456  693873 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:31:45.807513  693873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:31:45.831317  693873 api_server.go:72] duration metric: took 14.399090497s to wait for apiserver process to appear ...
	I1115 10:31:45.831344  693873 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:31:45.831363  693873 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:31:45.842630  693873 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 10:31:45.845192  693873 api_server.go:141] control plane version: v1.28.0
	I1115 10:31:45.845221  693873 api_server.go:131] duration metric: took 13.870584ms to wait for apiserver health ...
	I1115 10:31:45.845230  693873 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:31:45.854000  693873 system_pods.go:59] 8 kube-system pods found
	I1115 10:31:45.854030  693873 system_pods.go:61] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:31:45.854038  693873 system_pods.go:61] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running
	I1115 10:31:45.854043  693873 system_pods.go:61] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:31:45.854049  693873 system_pods.go:61] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running
	I1115 10:31:45.854054  693873 system_pods.go:61] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running
	I1115 10:31:45.854058  693873 system_pods.go:61] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:31:45.854062  693873 system_pods.go:61] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running
	I1115 10:31:45.854070  693873 system_pods.go:61] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:31:45.854075  693873 system_pods.go:74] duration metric: took 8.829024ms to wait for pod list to return data ...
	I1115 10:31:45.854084  693873 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:31:45.857260  693873 default_sa.go:45] found service account: "default"
	I1115 10:31:45.857281  693873 default_sa.go:55] duration metric: took 3.191592ms for default service account to be created ...
	I1115 10:31:45.857290  693873 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:31:45.863592  693873 system_pods.go:86] 8 kube-system pods found
	I1115 10:31:45.863685  693873 system_pods.go:89] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:31:45.863708  693873 system_pods.go:89] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running
	I1115 10:31:45.863747  693873 system_pods.go:89] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:31:45.863776  693873 system_pods.go:89] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running
	I1115 10:31:45.863799  693873 system_pods.go:89] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running
	I1115 10:31:45.863834  693873 system_pods.go:89] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:31:45.863865  693873 system_pods.go:89] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running
	I1115 10:31:45.863886  693873 system_pods.go:89] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:31:45.863945  693873 retry.go:31] will retry after 213.018025ms: missing components: kube-dns
	I1115 10:31:46.081504  693873 system_pods.go:86] 8 kube-system pods found
	I1115 10:31:46.081544  693873 system_pods.go:89] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:31:46.081550  693873 system_pods.go:89] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running
	I1115 10:31:46.081558  693873 system_pods.go:89] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:31:46.081563  693873 system_pods.go:89] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running
	I1115 10:31:46.081568  693873 system_pods.go:89] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running
	I1115 10:31:46.081572  693873 system_pods.go:89] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:31:46.081576  693873 system_pods.go:89] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running
	I1115 10:31:46.081582  693873 system_pods.go:89] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:31:46.081630  693873 retry.go:31] will retry after 245.200269ms: missing components: kube-dns
	I1115 10:31:46.331601  693873 system_pods.go:86] 8 kube-system pods found
	I1115 10:31:46.331631  693873 system_pods.go:89] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Running
	I1115 10:31:46.331637  693873 system_pods.go:89] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running
	I1115 10:31:46.331641  693873 system_pods.go:89] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:31:46.331645  693873 system_pods.go:89] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running
	I1115 10:31:46.331652  693873 system_pods.go:89] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running
	I1115 10:31:46.331656  693873 system_pods.go:89] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:31:46.331660  693873 system_pods.go:89] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running
	I1115 10:31:46.331664  693873 system_pods.go:89] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Running
	I1115 10:31:46.331672  693873 system_pods.go:126] duration metric: took 474.376251ms to wait for k8s-apps to be running ...
	I1115 10:31:46.331680  693873 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:31:46.331755  693873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:31:46.345879  693873 system_svc.go:56] duration metric: took 14.170842ms WaitForService to wait for kubelet
	I1115 10:31:46.345904  693873 kubeadm.go:587] duration metric: took 14.913682467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:31:46.345922  693873 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:31:46.350474  693873 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:31:46.350510  693873 node_conditions.go:123] node cpu capacity is 2
	I1115 10:31:46.350524  693873 node_conditions.go:105] duration metric: took 4.596673ms to run NodePressure ...
	I1115 10:31:46.350536  693873 start.go:242] waiting for startup goroutines ...
	I1115 10:31:46.350545  693873 start.go:247] waiting for cluster config update ...
	I1115 10:31:46.350556  693873 start.go:256] writing updated cluster config ...
	I1115 10:31:46.350836  693873 ssh_runner.go:195] Run: rm -f paused
	I1115 10:31:46.354461  693873 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:31:46.359751  693873 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6rz72" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.364592  693873 pod_ready.go:94] pod "coredns-5dd5756b68-6rz72" is "Ready"
	I1115 10:31:46.364616  693873 pod_ready.go:86] duration metric: took 4.839325ms for pod "coredns-5dd5756b68-6rz72" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.367663  693873 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.372443  693873 pod_ready.go:94] pod "etcd-old-k8s-version-448285" is "Ready"
	I1115 10:31:46.372475  693873 pod_ready.go:86] duration metric: took 4.782637ms for pod "etcd-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.375360  693873 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.379799  693873 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-448285" is "Ready"
	I1115 10:31:46.379824  693873 pod_ready.go:86] duration metric: took 4.440632ms for pod "kube-apiserver-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.382653  693873 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.759272  693873 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-448285" is "Ready"
	I1115 10:31:46.759296  693873 pod_ready.go:86] duration metric: took 376.591948ms for pod "kube-controller-manager-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:46.959020  693873 pod_ready.go:83] waiting for pod "kube-proxy-5pzbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:47.359380  693873 pod_ready.go:94] pod "kube-proxy-5pzbj" is "Ready"
	I1115 10:31:47.359407  693873 pod_ready.go:86] duration metric: took 400.360465ms for pod "kube-proxy-5pzbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:47.559150  693873 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:47.959209  693873 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-448285" is "Ready"
	I1115 10:31:47.959239  693873 pod_ready.go:86] duration metric: took 400.060789ms for pod "kube-scheduler-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:31:47.959253  693873 pod_ready.go:40] duration metric: took 1.604710218s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:31:48.023214  693873 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1115 10:31:48.026110  693873 out.go:203] 
	W1115 10:31:48.028860  693873 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:31:48.031687  693873 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:31:48.035400  693873 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-448285" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:31:45 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:45.800165944Z" level=info msg="Created container a05a0d0ac9fe88f8722533f23e7be9b3dd6755916c62b5471be75dc4f1a085a5: kube-system/coredns-5dd5756b68-6rz72/coredns" id=a8b31d28-d0b9-4044-a004-930c270fa8f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:31:45 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:45.800932634Z" level=info msg="Starting container: a05a0d0ac9fe88f8722533f23e7be9b3dd6755916c62b5471be75dc4f1a085a5" id=9a1adf53-4492-4419-b425-ac5c02ba1558 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:31:45 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:45.802737696Z" level=info msg="Started container" PID=1932 containerID=a05a0d0ac9fe88f8722533f23e7be9b3dd6755916c62b5471be75dc4f1a085a5 description=kube-system/coredns-5dd5756b68-6rz72/coredns id=9a1adf53-4492-4419-b425-ac5c02ba1558 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed61444d04936dc67520c3cc28a9d97cf97109fe144c05159ddef58f79d74ed5
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.54400853Z" level=info msg="Running pod sandbox: default/busybox/POD" id=828069fc-439f-41df-b7ff-4258e8c001a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.544078247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.549266445Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7b2d4b9ec4a139680aa7654ad905ad1af1a0ab56d0e30a81b1ec3e71cbc42e59 UID:db53b178-99fd-42b5-b5fc-37264803a8a3 NetNS:/var/run/netns/0b22b9e4-c78e-45eb-82f2-70748412feb8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000795b0}] Aliases:map[]}"
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.549418884Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.561383435Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7b2d4b9ec4a139680aa7654ad905ad1af1a0ab56d0e30a81b1ec3e71cbc42e59 UID:db53b178-99fd-42b5-b5fc-37264803a8a3 NetNS:/var/run/netns/0b22b9e4-c78e-45eb-82f2-70748412feb8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000795b0}] Aliases:map[]}"
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.561834894Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.565904467Z" level=info msg="Ran pod sandbox 7b2d4b9ec4a139680aa7654ad905ad1af1a0ab56d0e30a81b1ec3e71cbc42e59 with infra container: default/busybox/POD" id=828069fc-439f-41df-b7ff-4258e8c001a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.567431572Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ada17a31-4c8c-4b07-afe6-0a82eee57ae4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.56760554Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ada17a31-4c8c-4b07-afe6-0a82eee57ae4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.567657781Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ada17a31-4c8c-4b07-afe6-0a82eee57ae4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.569176361Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0e7392a-10f9-4ee2-b83d-fd6a358e316d name=/runtime.v1.ImageService/PullImage
	Nov 15 10:31:48 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:48.572083751Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.58541454Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a0e7392a-10f9-4ee2-b83d-fd6a358e316d name=/runtime.v1.ImageService/PullImage
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.586682517Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=177b03b8-37a7-4381-9512-a86e55396a1b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.589013234Z" level=info msg="Creating container: default/busybox/busybox" id=9a914eda-94d0-41cf-9fdd-9907e55d1c35 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.589133551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.608992088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.609505993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.628811142Z" level=info msg="Created container 93b38977886a914b24def9d16a37c87d1652b7b0e58bb0e8bd7cd2e4f15c275f: default/busybox/busybox" id=9a914eda-94d0-41cf-9fdd-9907e55d1c35 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.631408708Z" level=info msg="Starting container: 93b38977886a914b24def9d16a37c87d1652b7b0e58bb0e8bd7cd2e4f15c275f" id=c6b69178-7226-4c05-aa33-905563dd9461 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:31:50 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:50.633728652Z" level=info msg="Started container" PID=1991 containerID=93b38977886a914b24def9d16a37c87d1652b7b0e58bb0e8bd7cd2e4f15c275f description=default/busybox/busybox id=c6b69178-7226-4c05-aa33-905563dd9461 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b2d4b9ec4a139680aa7654ad905ad1af1a0ab56d0e30a81b1ec3e71cbc42e59
	Nov 15 10:31:56 old-k8s-version-448285 crio[843]: time="2025-11-15T10:31:56.465064482Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	93b38977886a9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   7b2d4b9ec4a13       busybox                                          default
	a05a0d0ac9fe8       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   ed61444d04936       coredns-5dd5756b68-6rz72                         kube-system
	40509ea48a6fc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   3853fed03bc39       storage-provisioner                              kube-system
	886d886b075a2       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   70fbd6376f8bc       kindnet-4sxqn                                    kube-system
	9b3a9ecac22e8       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      25 seconds ago      Running             kube-proxy                0                   97c7d6243d035       kube-proxy-5pzbj                                 kube-system
	cca3703b00d0f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   b031cba57f612       etcd-old-k8s-version-448285                      kube-system
	7334d370cd814       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   7f072f4c03892       kube-apiserver-old-k8s-version-448285            kube-system
	f762e6b560089       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   d33d011f5c7a7       kube-scheduler-old-k8s-version-448285            kube-system
	31043fdf82084       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   93f91c46ed629       kube-controller-manager-old-k8s-version-448285   kube-system
	
	
	==> coredns [a05a0d0ac9fe88f8722533f23e7be9b3dd6755916c62b5471be75dc4f1a085a5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50025 - 2077 "HINFO IN 7347638015347911249.3932346444199019478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004294363s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-448285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-448285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=old-k8s-version-448285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_31_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:31:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-448285
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:31:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:31:50 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:31:50 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:31:50 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:31:50 +0000   Sat, 15 Nov 2025 10:31:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-448285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                992b5604-a676-4f5a-a947-58bb000cddf9
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-6rz72                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-448285                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-4sxqn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-448285             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-448285    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-5pzbj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-448285             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-448285 event: Registered Node old-k8s-version-448285 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-448285 status is now: NodeReady
	
	
	==> dmesg <==
	[ +29.285636] overlayfs: idmapped layers are currently not supported
	[Nov15 10:05] overlayfs: idmapped layers are currently not supported
	[Nov15 10:09] overlayfs: idmapped layers are currently not supported
	[Nov15 10:10] overlayfs: idmapped layers are currently not supported
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cca3703b00d0fd01b02de6626f659b30bdd5226f8ca3df2335050e3c6f876aed] <==
	{"level":"info","ts":"2025-11-15T10:31:12.089552Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T10:31:12.087998Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:31:12.089191Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T10:31:12.089415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-15T10:31:12.089954Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-15T10:31:12.089704Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T10:31:12.089777Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:31:12.517191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-15T10:31:12.517311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-15T10:31:12.517357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-15T10:31:12.517394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-15T10:31:12.51743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T10:31:12.517472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-15T10:31:12.517506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T10:31:12.521721Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:31:12.522838Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-448285 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T10:31:12.522899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:31:12.525916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T10:31:12.529848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:31:12.530815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T10:31:12.537711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:31:12.537771Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:31:12.549767Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:31:12.549934Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:31:12.550118Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 10:31:58 up  5:14,  0 user,  load average: 2.63, 3.40, 2.77
	Linux old-k8s-version-448285 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [886d886b075a23fc674320394694f0aa0a0ff64dcb175a0d9c54595923d30131] <==
	I1115 10:31:34.911607       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:31:34.911821       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:31:34.911935       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:31:34.911950       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:31:34.911965       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:31:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:31:35.115009       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:31:35.115040       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:31:35.115049       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:31:35.115336       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:31:35.315286       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:31:35.315315       1 metrics.go:72] Registering metrics
	I1115 10:31:35.315365       1 controller.go:711] "Syncing nftables rules"
	I1115 10:31:45.118568       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:31:45.118634       1 main.go:301] handling current node
	I1115 10:31:55.117358       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:31:55.117396       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7334d370cd81498907be0962d22bb7ee5ba2da34d53391c27e855612ff120ea2] <==
	I1115 10:31:15.738504       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:31:15.738510       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:31:15.741592       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 10:31:15.742310       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 10:31:15.742401       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 10:31:15.752336       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:31:15.752443       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 10:31:15.788109       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1115 10:31:15.788318       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 10:31:15.792408       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:31:16.439379       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:31:16.444062       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:31:16.444167       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:31:17.024442       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:31:17.071092       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:31:17.194464       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:31:17.201251       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 10:31:17.202342       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 10:31:17.207032       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:31:17.658058       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 10:31:18.971889       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 10:31:19.014790       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:31:19.038894       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1115 10:31:31.171964       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 10:31:31.427895       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [31043fdf820840324b7843d6d8d43ace472a4af4ee68c2378f1e65c92f6964fe] <==
	I1115 10:31:30.642881       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-448285" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1115 10:31:30.642966       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-448285" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1115 10:31:30.672422       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:31:31.086618       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:31:31.110259       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:31:31.110407       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:31:31.177888       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1115 10:31:31.489203       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4sxqn"
	I1115 10:31:31.512586       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5pzbj"
	I1115 10:31:31.623774       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fc9b4"
	I1115 10:31:31.752488       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6rz72"
	I1115 10:31:31.797924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="621.101501ms"
	I1115 10:31:31.816046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.073536ms"
	I1115 10:31:31.817234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.003µs"
	I1115 10:31:31.817346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.926µs"
	I1115 10:31:32.857572       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1115 10:31:32.892821       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fc9b4"
	I1115 10:31:32.911738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.20338ms"
	I1115 10:31:32.933246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.45974ms"
	I1115 10:31:32.933345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.176µs"
	I1115 10:31:45.417308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.511µs"
	I1115 10:31:45.436350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.035µs"
	I1115 10:31:45.619116       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1115 10:31:46.292004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.814761ms"
	I1115 10:31:46.292100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.529µs"
	
	
	==> kube-proxy [9b3a9ecac22e81414e55372bd61c02e11d852ded47df6a077e6afcd45a334007] <==
	I1115 10:31:32.030124       1 server_others.go:69] "Using iptables proxy"
	I1115 10:31:32.049909       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 10:31:32.161903       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:31:32.181553       1 server_others.go:152] "Using iptables Proxier"
	I1115 10:31:32.181592       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 10:31:32.181641       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 10:31:32.181667       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 10:31:32.181874       1 server.go:846] "Version info" version="v1.28.0"
	I1115 10:31:32.181884       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:31:32.183347       1 config.go:188] "Starting service config controller"
	I1115 10:31:32.183371       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 10:31:32.183387       1 config.go:97] "Starting endpoint slice config controller"
	I1115 10:31:32.183391       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 10:31:32.185029       1 config.go:315] "Starting node config controller"
	I1115 10:31:32.185039       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 10:31:32.284429       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 10:31:32.284435       1 shared_informer.go:318] Caches are synced for service config
	I1115 10:31:32.285812       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f762e6b560089e1335603f2395f8c37d06f866ecb9c93def3baab934942c5d1c] <==
	W1115 10:31:15.765563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1115 10:31:15.765706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1115 10:31:15.766817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1115 10:31:15.766862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1115 10:31:15.766820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1115 10:31:15.766881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1115 10:31:15.766944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1115 10:31:15.769692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1115 10:31:15.766957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1115 10:31:15.769799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1115 10:31:15.767049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1115 10:31:15.769860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1115 10:31:16.635796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1115 10:31:16.635844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1115 10:31:16.677809       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1115 10:31:16.677947       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:31:16.678387       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1115 10:31:16.678455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1115 10:31:16.801978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1115 10:31:16.802019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1115 10:31:16.802184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1115 10:31:16.802234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1115 10:31:16.805284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1115 10:31:16.805319       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1115 10:31:19.148836       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.565783    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnfp\" (UniqueName: \"kubernetes.io/projected/5a143b70-c11c-48d7-8cc3-9881bdd32a70-kube-api-access-jsnfp\") pod \"kube-proxy-5pzbj\" (UID: \"5a143b70-c11c-48d7-8cc3-9881bdd32a70\") " pod="kube-system/kube-proxy-5pzbj"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.565850    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a143b70-c11c-48d7-8cc3-9881bdd32a70-xtables-lock\") pod \"kube-proxy-5pzbj\" (UID: \"5a143b70-c11c-48d7-8cc3-9881bdd32a70\") " pod="kube-system/kube-proxy-5pzbj"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.565877    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a143b70-c11c-48d7-8cc3-9881bdd32a70-lib-modules\") pod \"kube-proxy-5pzbj\" (UID: \"5a143b70-c11c-48d7-8cc3-9881bdd32a70\") " pod="kube-system/kube-proxy-5pzbj"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.565907    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a143b70-c11c-48d7-8cc3-9881bdd32a70-kube-proxy\") pod \"kube-proxy-5pzbj\" (UID: \"5a143b70-c11c-48d7-8cc3-9881bdd32a70\") " pod="kube-system/kube-proxy-5pzbj"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.689125    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15858d1d-82c8-4f57-b984-24a45188650c-lib-modules\") pod \"kindnet-4sxqn\" (UID: \"15858d1d-82c8-4f57-b984-24a45188650c\") " pod="kube-system/kindnet-4sxqn"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.689180    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/15858d1d-82c8-4f57-b984-24a45188650c-cni-cfg\") pod \"kindnet-4sxqn\" (UID: \"15858d1d-82c8-4f57-b984-24a45188650c\") " pod="kube-system/kindnet-4sxqn"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.689245    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15858d1d-82c8-4f57-b984-24a45188650c-xtables-lock\") pod \"kindnet-4sxqn\" (UID: \"15858d1d-82c8-4f57-b984-24a45188650c\") " pod="kube-system/kindnet-4sxqn"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: I1115 10:31:31.689270    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6hwz\" (UniqueName: \"kubernetes.io/projected/15858d1d-82c8-4f57-b984-24a45188650c-kube-api-access-h6hwz\") pod \"kindnet-4sxqn\" (UID: \"15858d1d-82c8-4f57-b984-24a45188650c\") " pod="kube-system/kindnet-4sxqn"
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: W1115 10:31:31.847259    1376 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-97c7d6243d035e0fba81fba78f3d42dbf655624ad255d216d6c545ab641bec9e WatchSource:0}: Error finding container 97c7d6243d035e0fba81fba78f3d42dbf655624ad255d216d6c545ab641bec9e: Status 404 returned error can't find the container with id 97c7d6243d035e0fba81fba78f3d42dbf655624ad255d216d6c545ab641bec9e
	Nov 15 10:31:31 old-k8s-version-448285 kubelet[1376]: W1115 10:31:31.871457    1376 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-70fbd6376f8bc0d6ac2c5c2d32d47b7ab0c9906615ac33777935c806266a9f4c WatchSource:0}: Error finding container 70fbd6376f8bc0d6ac2c5c2d32d47b7ab0c9906615ac33777935c806266a9f4c: Status 404 returned error can't find the container with id 70fbd6376f8bc0d6ac2c5c2d32d47b7ab0c9906615ac33777935c806266a9f4c
	Nov 15 10:31:32 old-k8s-version-448285 kubelet[1376]: I1115 10:31:32.238068    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5pzbj" podStartSLOduration=1.238027056 podCreationTimestamp="2025-11-15 10:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:31:32.237836826 +0000 UTC m=+13.311259570" watchObservedRunningTime="2025-11-15 10:31:32.238027056 +0000 UTC m=+13.311449800"
	Nov 15 10:31:39 old-k8s-version-448285 kubelet[1376]: I1115 10:31:39.166310    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4sxqn" podStartSLOduration=5.281163319 podCreationTimestamp="2025-11-15 10:31:31 +0000 UTC" firstStartedPulling="2025-11-15 10:31:31.891434758 +0000 UTC m=+12.964857510" lastFinishedPulling="2025-11-15 10:31:34.776539693 +0000 UTC m=+15.849962437" observedRunningTime="2025-11-15 10:31:35.251905446 +0000 UTC m=+16.325328198" watchObservedRunningTime="2025-11-15 10:31:39.166268246 +0000 UTC m=+20.239690998"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.386141    1376 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.416098    1376 topology_manager.go:215] "Topology Admit Handler" podUID="1b9cd2bc-b240-497e-8cd9-6ebb31c76230" podNamespace="kube-system" podName="coredns-5dd5756b68-6rz72"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.424039    1376 topology_manager.go:215] "Topology Admit Handler" podUID="ecc31eb8-2cae-47f4-9c85-8dbc48b1d546" podNamespace="kube-system" podName="storage-provisioner"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.494455    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ecc31eb8-2cae-47f4-9c85-8dbc48b1d546-tmp\") pod \"storage-provisioner\" (UID: \"ecc31eb8-2cae-47f4-9c85-8dbc48b1d546\") " pod="kube-system/storage-provisioner"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.494512    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphz4\" (UniqueName: \"kubernetes.io/projected/1b9cd2bc-b240-497e-8cd9-6ebb31c76230-kube-api-access-lphz4\") pod \"coredns-5dd5756b68-6rz72\" (UID: \"1b9cd2bc-b240-497e-8cd9-6ebb31c76230\") " pod="kube-system/coredns-5dd5756b68-6rz72"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.494550    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9cd2bc-b240-497e-8cd9-6ebb31c76230-config-volume\") pod \"coredns-5dd5756b68-6rz72\" (UID: \"1b9cd2bc-b240-497e-8cd9-6ebb31c76230\") " pod="kube-system/coredns-5dd5756b68-6rz72"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: I1115 10:31:45.494580    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hmq\" (UniqueName: \"kubernetes.io/projected/ecc31eb8-2cae-47f4-9c85-8dbc48b1d546-kube-api-access-f6hmq\") pod \"storage-provisioner\" (UID: \"ecc31eb8-2cae-47f4-9c85-8dbc48b1d546\") " pod="kube-system/storage-provisioner"
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: W1115 10:31:45.737163    1376 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-3853fed03bc3966cad347f8aac215ae6677b6a3d10bf66b82449bd3bee9461d6 WatchSource:0}: Error finding container 3853fed03bc3966cad347f8aac215ae6677b6a3d10bf66b82449bd3bee9461d6: Status 404 returned error can't find the container with id 3853fed03bc3966cad347f8aac215ae6677b6a3d10bf66b82449bd3bee9461d6
	Nov 15 10:31:45 old-k8s-version-448285 kubelet[1376]: W1115 10:31:45.751743    1376 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-ed61444d04936dc67520c3cc28a9d97cf97109fe144c05159ddef58f79d74ed5 WatchSource:0}: Error finding container ed61444d04936dc67520c3cc28a9d97cf97109fe144c05159ddef58f79d74ed5: Status 404 returned error can't find the container with id ed61444d04936dc67520c3cc28a9d97cf97109fe144c05159ddef58f79d74ed5
	Nov 15 10:31:46 old-k8s-version-448285 kubelet[1376]: I1115 10:31:46.280593    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.280551096 podCreationTimestamp="2025-11-15 10:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:31:46.26730091 +0000 UTC m=+27.340723654" watchObservedRunningTime="2025-11-15 10:31:46.280551096 +0000 UTC m=+27.353973840"
	Nov 15 10:31:48 old-k8s-version-448285 kubelet[1376]: I1115 10:31:48.242123    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6rz72" podStartSLOduration=17.242064048 podCreationTimestamp="2025-11-15 10:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:31:46.281497802 +0000 UTC m=+27.354920563" watchObservedRunningTime="2025-11-15 10:31:48.242064048 +0000 UTC m=+29.315486792"
	Nov 15 10:31:48 old-k8s-version-448285 kubelet[1376]: I1115 10:31:48.242381    1376 topology_manager.go:215] "Topology Admit Handler" podUID="db53b178-99fd-42b5-b5fc-37264803a8a3" podNamespace="default" podName="busybox"
	Nov 15 10:31:48 old-k8s-version-448285 kubelet[1376]: I1115 10:31:48.314949    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ttrd\" (UniqueName: \"kubernetes.io/projected/db53b178-99fd-42b5-b5fc-37264803a8a3-kube-api-access-7ttrd\") pod \"busybox\" (UID: \"db53b178-99fd-42b5-b5fc-37264803a8a3\") " pod="default/busybox"
	
	
	==> storage-provisioner [40509ea48a6fcf3e13c7b867d6f8c2d09508785dbb654361ca76e35bbfa27ff2] <==
	I1115 10:31:45.798975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:31:45.847405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:31:45.847460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:31:45.915972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:31:45.916263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448285_049ad255-a61b-4bc8-81c3-f96783a1d5f6!
	I1115 10:31:45.916500       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85aebc5f-7825-4f4f-9b86-fd9c4a02df82", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-448285_049ad255-a61b-4bc8-81c3-f96783a1d5f6 became leader
	I1115 10:31:46.018385       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448285_049ad255-a61b-4bc8-81c3-f96783a1d5f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-448285 -n old-k8s-version-448285
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-448285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-448285 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-448285 --alsologtostderr -v=1: exit status 80 (1.772102745s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-448285 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:33:16.578259  699656 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:33:16.578461  699656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:33:16.578474  699656 out.go:374] Setting ErrFile to fd 2...
	I1115 10:33:16.578480  699656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:33:16.578911  699656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:33:16.579700  699656 out.go:368] Setting JSON to false
	I1115 10:33:16.579749  699656 mustload.go:66] Loading cluster: old-k8s-version-448285
	I1115 10:33:16.580263  699656 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:33:16.580805  699656 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:33:16.597940  699656 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:33:16.601887  699656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:33:16.670249  699656 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:33:16.658361899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:33:16.670882  699656 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-448285 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:33:16.674811  699656 out.go:179] * Pausing node old-k8s-version-448285 ... 
	I1115 10:33:16.678110  699656 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:33:16.678508  699656 ssh_runner.go:195] Run: systemctl --version
	I1115 10:33:16.678561  699656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:33:16.695528  699656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:33:16.804410  699656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:16.818834  699656 pause.go:52] kubelet running: true
	I1115 10:33:16.818996  699656 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:33:17.073314  699656 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:33:17.073407  699656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:33:17.138281  699656 cri.go:89] found id: "eface29866b4e016367d33114ac59baf51eba41f487b4403f1437106ef9f4c88"
	I1115 10:33:17.138308  699656 cri.go:89] found id: "ece1d63316e0920d4b4f13ed85133a023dc0060eec2887ccb385777009fb3bc3"
	I1115 10:33:17.138314  699656 cri.go:89] found id: "4313ada3fc01b4d192454a21182be5e6fd97a5a0c54bb1af6a233af293f09eee"
	I1115 10:33:17.138318  699656 cri.go:89] found id: "0c4a76733470254348c10dd1fa258a14f22a9cff5d0003ab9b429eb6979709b0"
	I1115 10:33:17.138321  699656 cri.go:89] found id: "39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2"
	I1115 10:33:17.138325  699656 cri.go:89] found id: "cdd43d36d844ca41497db349276a208a21531ba8c1b2993cabbb0595b94c98eb"
	I1115 10:33:17.138329  699656 cri.go:89] found id: "a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49"
	I1115 10:33:17.138332  699656 cri.go:89] found id: "d18121baff438aa2b63e503bb022ccc4a3ae97ae800bce558ce414199861e699"
	I1115 10:33:17.138336  699656 cri.go:89] found id: "1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf"
	I1115 10:33:17.138353  699656 cri.go:89] found id: "a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	I1115 10:33:17.138360  699656 cri.go:89] found id: "84fde01cef2dc7f7c8f204de97b73b47e569756843d2f46859eb14b56140d6fd"
	I1115 10:33:17.138363  699656 cri.go:89] found id: ""
	I1115 10:33:17.138411  699656 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:33:17.155399  699656 retry.go:31] will retry after 154.232217ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:33:17Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:33:17.310819  699656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:17.324250  699656 pause.go:52] kubelet running: false
	I1115 10:33:17.324370  699656 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:33:17.493537  699656 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:33:17.493651  699656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:33:17.561262  699656 cri.go:89] found id: "eface29866b4e016367d33114ac59baf51eba41f487b4403f1437106ef9f4c88"
	I1115 10:33:17.561285  699656 cri.go:89] found id: "ece1d63316e0920d4b4f13ed85133a023dc0060eec2887ccb385777009fb3bc3"
	I1115 10:33:17.561291  699656 cri.go:89] found id: "4313ada3fc01b4d192454a21182be5e6fd97a5a0c54bb1af6a233af293f09eee"
	I1115 10:33:17.561295  699656 cri.go:89] found id: "0c4a76733470254348c10dd1fa258a14f22a9cff5d0003ab9b429eb6979709b0"
	I1115 10:33:17.561300  699656 cri.go:89] found id: "39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2"
	I1115 10:33:17.561303  699656 cri.go:89] found id: "cdd43d36d844ca41497db349276a208a21531ba8c1b2993cabbb0595b94c98eb"
	I1115 10:33:17.561306  699656 cri.go:89] found id: "a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49"
	I1115 10:33:17.561309  699656 cri.go:89] found id: "d18121baff438aa2b63e503bb022ccc4a3ae97ae800bce558ce414199861e699"
	I1115 10:33:17.561312  699656 cri.go:89] found id: "1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf"
	I1115 10:33:17.561318  699656 cri.go:89] found id: "a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	I1115 10:33:17.561321  699656 cri.go:89] found id: "84fde01cef2dc7f7c8f204de97b73b47e569756843d2f46859eb14b56140d6fd"
	I1115 10:33:17.561324  699656 cri.go:89] found id: ""
	I1115 10:33:17.561373  699656 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:33:17.573244  699656 retry.go:31] will retry after 417.572075ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:33:17Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:33:17.992017  699656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:33:18.007816  699656 pause.go:52] kubelet running: false
	I1115 10:33:18.007935  699656 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:33:18.188502  699656 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:33:18.188600  699656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:33:18.268375  699656 cri.go:89] found id: "eface29866b4e016367d33114ac59baf51eba41f487b4403f1437106ef9f4c88"
	I1115 10:33:18.268401  699656 cri.go:89] found id: "ece1d63316e0920d4b4f13ed85133a023dc0060eec2887ccb385777009fb3bc3"
	I1115 10:33:18.268407  699656 cri.go:89] found id: "4313ada3fc01b4d192454a21182be5e6fd97a5a0c54bb1af6a233af293f09eee"
	I1115 10:33:18.268411  699656 cri.go:89] found id: "0c4a76733470254348c10dd1fa258a14f22a9cff5d0003ab9b429eb6979709b0"
	I1115 10:33:18.268415  699656 cri.go:89] found id: "39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2"
	I1115 10:33:18.268418  699656 cri.go:89] found id: "cdd43d36d844ca41497db349276a208a21531ba8c1b2993cabbb0595b94c98eb"
	I1115 10:33:18.268421  699656 cri.go:89] found id: "a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49"
	I1115 10:33:18.268443  699656 cri.go:89] found id: "d18121baff438aa2b63e503bb022ccc4a3ae97ae800bce558ce414199861e699"
	I1115 10:33:18.268453  699656 cri.go:89] found id: "1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf"
	I1115 10:33:18.268462  699656 cri.go:89] found id: "a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	I1115 10:33:18.268466  699656 cri.go:89] found id: "84fde01cef2dc7f7c8f204de97b73b47e569756843d2f46859eb14b56140d6fd"
	I1115 10:33:18.268475  699656 cri.go:89] found id: ""
	I1115 10:33:18.268545  699656 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:33:18.283118  699656 out.go:203] 
	W1115 10:33:18.286206  699656 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:33:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:33:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:33:18.286296  699656 out.go:285] * 
	* 
	W1115 10:33:18.293505  699656 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:33:18.296526  699656 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-448285 --alsologtostderr -v=1 failed: exit status 80
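Every retry above fails at the same step: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so the pause path never obtains a container list even though CRI reports the kube-system pods as running. A minimal way to inspect this by hand on the node (a sketch only; the profile name comes from this run, and the two state directories checked are the usual runc/crun defaults rather than paths this log confirms):

	out/minikube-linux-arm64 ssh -p old-k8s-version-448285
	# inside the node: which OCI runtime state directory actually exists?
	ls -d /run/runc /run/crun 2>/dev/null
	# the exact command the pause path retries (see the log above)
	sudo runc list -f json
	# CRI still lists the kube-system containers that pause was supposed to stop
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
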
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-448285
helpers_test.go:243: (dbg) docker inspect old-k8s-version-448285:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a",
	        "Created": "2025-11-15T10:30:52.988114549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 697572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:32:11.620488955Z",
	            "FinishedAt": "2025-11-15T10:32:10.824705848Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/hosts",
	        "LogPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a-json.log",
	        "Name": "/old-k8s-version-448285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-448285:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-448285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a",
	                "LowerDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-448285",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-448285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-448285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-448285",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-448285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ea577443d47296b0bde358b023c4212aaf7984b0bc34031e2840d0f3a56877a4",
	            "SandboxKey": "/var/run/docker/netns/ea577443d472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-448285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:72:ef:13:61:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12ca3f9e094ca466512267d8c860ff57126abdec6db67bd16d9375e8738c15d5",
	                    "EndpointID": "9b1aa47e759866b8131a4e81e3a0c987261abee8b02d48f217db8ab86305d37f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-448285",
	                        "8d49869cd1fd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
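The SSH endpoint used by the failed pause attempt (127.0.0.1:33784 in the sshutil line above) can be read straight out of this inspect output; the same Go template that appears in the log resolves the published 22/tcp port (a sketch, with the profile name taken from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-448285
	# for this run the template resolves to 33784
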
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285: exit status 2 (353.655462ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-448285 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-448285 logs -n 25: (1.398496802s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-864099 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo containerd config dump                                                                                                                                                                                                  │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo crio config                                                                                                                                                                                                             │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-864099                                                                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-683299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p kubernetes-upgrade-480353                                                                                                                                                                                                                  │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-683299                                                                                                                                                                                                                   │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-115480 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-448285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:32:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:32:11.362431  697447 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:32:11.362582  697447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:32:11.362614  697447 out.go:374] Setting ErrFile to fd 2...
	I1115 10:32:11.362627  697447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:32:11.363593  697447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:32:11.364063  697447 out.go:368] Setting JSON to false
	I1115 10:32:11.364996  697447 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18883,"bootTime":1763183849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:32:11.365068  697447 start.go:143] virtualization:  
	I1115 10:32:11.366781  697447 out.go:179] * [old-k8s-version-448285] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:32:11.367985  697447 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:32:11.368066  697447 notify.go:221] Checking for updates...
	I1115 10:32:11.370378  697447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:32:11.371435  697447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:32:11.372473  697447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:32:11.373578  697447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:32:11.374591  697447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:32:11.376165  697447 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:32:11.377819  697447 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 10:32:11.378781  697447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:32:11.416963  697447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:32:11.417118  697447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:32:11.477332  697447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:32:11.467952076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:32:11.477440  697447 docker.go:319] overlay module found
	I1115 10:32:11.478766  697447 out.go:179] * Using the docker driver based on existing profile
	I1115 10:32:11.479827  697447 start.go:309] selected driver: docker
	I1115 10:32:11.479843  697447 start.go:930] validating driver "docker" against &{Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:11.479945  697447 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:32:11.480665  697447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:32:11.542605  697447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:32:11.533752596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:32:11.542943  697447 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:32:11.542977  697447 cni.go:84] Creating CNI manager for ""
	I1115 10:32:11.543038  697447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:11.543087  697447 start.go:353] cluster config:
	{Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:11.544461  697447 out.go:179] * Starting "old-k8s-version-448285" primary control-plane node in "old-k8s-version-448285" cluster
	I1115 10:32:11.545470  697447 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:32:11.546479  697447 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:32:11.547513  697447 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:32:11.547560  697447 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 10:32:11.547573  697447 cache.go:65] Caching tarball of preloaded images
	I1115 10:32:11.547604  697447 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:32:11.547668  697447 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:32:11.547679  697447 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 10:32:11.547784  697447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json ...
	I1115 10:32:11.567170  697447 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:32:11.567190  697447 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:32:11.567202  697447 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:32:11.567224  697447 start.go:360] acquireMachinesLock for old-k8s-version-448285: {Name:mk5fdf42c0c76187fa0952dcaa2e938d4fb739c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:32:11.567273  697447 start.go:364] duration metric: took 32.418µs to acquireMachinesLock for "old-k8s-version-448285"
	I1115 10:32:11.567292  697447 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:32:11.567298  697447 fix.go:54] fixHost starting: 
	I1115 10:32:11.567554  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:11.583749  697447 fix.go:112] recreateIfNeeded on old-k8s-version-448285: state=Stopped err=<nil>
	W1115 10:32:11.583777  697447 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:32:11.585105  697447 out.go:252] * Restarting existing docker container for "old-k8s-version-448285" ...
	I1115 10:32:11.585194  697447 cli_runner.go:164] Run: docker start old-k8s-version-448285
	I1115 10:32:11.846786  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:11.871237  697447 kic.go:430] container "old-k8s-version-448285" state is running.
	I1115 10:32:11.871625  697447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:32:11.896226  697447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json ...
	I1115 10:32:11.896440  697447 machine.go:94] provisionDockerMachine start ...
	I1115 10:32:11.896593  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:11.915876  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:11.916218  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:11.916228  697447 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:32:11.916874  697447 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55302->127.0.0.1:33784: read: connection reset by peer
	I1115 10:32:15.077327  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448285
	
	I1115 10:32:15.077349  697447 ubuntu.go:182] provisioning hostname "old-k8s-version-448285"
	I1115 10:32:15.077437  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.096311  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:15.096631  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:15.096649  697447 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448285 && echo "old-k8s-version-448285" | sudo tee /etc/hostname
	I1115 10:32:15.259159  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448285
	
	I1115 10:32:15.259236  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.276483  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:15.276900  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:15.276921  697447 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448285/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:32:15.426369  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:32:15.426434  697447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:32:15.426470  697447 ubuntu.go:190] setting up certificates
	I1115 10:32:15.426499  697447 provision.go:84] configureAuth start
	I1115 10:32:15.426602  697447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:32:15.444146  697447 provision.go:143] copyHostCerts
	I1115 10:32:15.444211  697447 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:32:15.444234  697447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:32:15.444311  697447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:32:15.444565  697447 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:32:15.444574  697447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:32:15.444611  697447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:32:15.444720  697447 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:32:15.444725  697447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:32:15.444751  697447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:32:15.444799  697447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448285 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-448285]
	I1115 10:32:15.784566  697447 provision.go:177] copyRemoteCerts
	I1115 10:32:15.784639  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:32:15.784683  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.803346  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:15.909992  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:32:15.928338  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:32:15.947363  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:32:15.966209  697447 provision.go:87] duration metric: took 539.661617ms to configureAuth
	I1115 10:32:15.966238  697447 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:32:15.966429  697447 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:32:15.966547  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.984795  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:15.985188  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:15.985207  697447 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:32:16.295918  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:32:16.296010  697447 machine.go:97] duration metric: took 4.399560663s to provisionDockerMachine
	I1115 10:32:16.296045  697447 start.go:293] postStartSetup for "old-k8s-version-448285" (driver="docker")
	I1115 10:32:16.296094  697447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:32:16.296240  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:32:16.296331  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.319278  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.425486  697447 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:32:16.428770  697447 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:32:16.428800  697447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:32:16.428812  697447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:32:16.428867  697447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:32:16.428947  697447 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:32:16.429057  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:32:16.436538  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:32:16.454414  697447 start.go:296] duration metric: took 158.319914ms for postStartSetup
	I1115 10:32:16.454506  697447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:32:16.454550  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.470472  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.570216  697447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:32:16.576614  697447 fix.go:56] duration metric: took 5.009308893s for fixHost
	I1115 10:32:16.576640  697447 start.go:83] releasing machines lock for "old-k8s-version-448285", held for 5.009359649s
	I1115 10:32:16.576714  697447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:32:16.593791  697447 ssh_runner.go:195] Run: cat /version.json
	I1115 10:32:16.593856  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.594130  697447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:32:16.594211  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.625047  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.633732  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.816371  697447 ssh_runner.go:195] Run: systemctl --version
	I1115 10:32:16.822882  697447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:32:16.857564  697447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:32:16.862277  697447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:32:16.862341  697447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:32:16.869548  697447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:32:16.869568  697447 start.go:496] detecting cgroup driver to use...
	I1115 10:32:16.869648  697447 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:32:16.869706  697447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:32:16.884740  697447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:32:16.898636  697447 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:32:16.898696  697447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:32:16.913440  697447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:32:16.926108  697447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:32:17.049825  697447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:32:17.177782  697447 docker.go:234] disabling docker service ...
	I1115 10:32:17.177859  697447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:32:17.195315  697447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:32:17.208878  697447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:32:17.324116  697447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:32:17.450411  697447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:32:17.465979  697447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:32:17.481364  697447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 10:32:17.481460  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.490482  697447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:32:17.490572  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.499841  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.508818  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.517442  697447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:32:17.525332  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.534843  697447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.543138  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.552520  697447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:32:17.559678  697447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:32:17.566651  697447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:17.681697  697447 ssh_runner.go:195] Run: sudo systemctl restart crio
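	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; the grep below is a sketch reconstructed from those commands, not a dump taken from the node:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",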
	I1115 10:32:17.815736  697447 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:32:17.815807  697447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:32:17.819583  697447 start.go:564] Will wait 60s for crictl version
	I1115 10:32:17.819712  697447 ssh_runner.go:195] Run: which crictl
	I1115 10:32:17.823390  697447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:32:17.849095  697447 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:32:17.849274  697447 ssh_runner.go:195] Run: crio --version
	I1115 10:32:17.877704  697447 ssh_runner.go:195] Run: crio --version
	I1115 10:32:17.916588  697447 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 10:32:17.919380  697447 cli_runner.go:164] Run: docker network inspect old-k8s-version-448285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:32:17.934736  697447 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:32:17.938432  697447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:17.947985  697447 kubeadm.go:884] updating cluster {Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:32:17.948113  697447 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:32:17.948170  697447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:17.981529  697447 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:17.981553  697447 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:32:17.981645  697447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:18.009169  697447 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:18.009201  697447 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:32:18.009210  697447 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 10:32:18.009308  697447 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-448285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:32:18.009410  697447 ssh_runner.go:195] Run: crio config
	I1115 10:32:18.081434  697447 cni.go:84] Creating CNI manager for ""
	I1115 10:32:18.081461  697447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:18.081508  697447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:32:18.081539  697447 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448285 NodeName:old-k8s-version-448285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:32:18.081760  697447 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-448285"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:32:18.081862  697447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 10:32:18.090345  697447 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:32:18.090466  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:32:18.098436  697447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1115 10:32:18.111607  697447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:32:18.124035  697447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
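	With the rendered config now sitting at /var/tmp/minikube/kubeadm.yaml.new, a low-risk way to sanity-check it is to ask kubeadm which images it implies; this only parses the file and assumes a kubeadm binary is cached alongside the kubelet under /var/lib/minikube/binaries/v1.28.0:
	  sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new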
	I1115 10:32:18.137169  697447 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:32:18.140745  697447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:18.150290  697447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:18.273724  697447 ssh_runner.go:195] Run: sudo systemctl start kubelet
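	To confirm that systemd is using the unit and drop-in copied a few lines above, the standard systemd tooling is enough; nothing here is minikube-specific:
	  systemctl cat kubelet.service            # should show /lib/systemd/system/kubelet.service plus 10-kubeadm.conf
	  systemctl is-active kubelet              # "active" once the start above succeeds
	  journalctl -u kubelet --no-pager -n 20   # recent kubelet output if it does not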
	I1115 10:32:18.291725  697447 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285 for IP: 192.168.85.2
	I1115 10:32:18.291747  697447 certs.go:195] generating shared ca certs ...
	I1115 10:32:18.291762  697447 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:18.291980  697447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:32:18.292058  697447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:32:18.292080  697447 certs.go:257] generating profile certs ...
	I1115 10:32:18.292198  697447 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.key
	I1115 10:32:18.292295  697447 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key.28437dd2
	I1115 10:32:18.292376  697447 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key
	I1115 10:32:18.292518  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:32:18.292569  697447 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:32:18.292583  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:32:18.292619  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:32:18.292685  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:32:18.292714  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:32:18.292790  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:32:18.293387  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:32:18.313137  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:32:18.330831  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:32:18.352976  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:32:18.369635  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 10:32:18.390179  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:32:18.407650  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:32:18.430857  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:32:18.455089  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:32:18.475322  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:32:18.500133  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:32:18.528431  697447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:32:18.541908  697447 ssh_runner.go:195] Run: openssl version
	I1115 10:32:18.550461  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:32:18.560706  697447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:32:18.565423  697447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:32:18.565491  697447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:32:18.613300  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:32:18.622238  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:32:18.643445  697447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:18.647885  697447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:18.647951  697447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:18.690761  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:32:18.699174  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:32:18.707433  697447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:32:18.711445  697447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:32:18.711535  697447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:32:18.753417  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
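	The hash-named links above follow OpenSSL's subject-hash convention: openssl x509 -hash prints the name that -CApath lookups expect under /etc/ssl/certs. The same idea by hand, reusing a certificate path from this log:
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expect: OK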
	I1115 10:32:18.761354  697447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:32:18.765638  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:32:18.806436  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:32:18.848855  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:32:18.890414  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:32:18.931298  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:32:18.980932  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:32:19.039105  697447 kubeadm.go:401] StartCluster: {Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:19.039232  697447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:32:19.039341  697447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:32:19.098131  697447 cri.go:89] found id: "a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49"
	I1115 10:32:19.098155  697447 cri.go:89] found id: "1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf"
	I1115 10:32:19.098159  697447 cri.go:89] found id: ""
	I1115 10:32:19.098212  697447 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:32:19.132629  697447 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:32:19Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:32:19.132789  697447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:32:19.151922  697447 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:32:19.151986  697447 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:32:19.152079  697447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:32:19.175009  697447 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:32:19.175648  697447 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-448285" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:32:19.175988  697447 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-448285" cluster setting kubeconfig missing "old-k8s-version-448285" context setting]
	I1115 10:32:19.176594  697447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:19.178391  697447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:32:19.197615  697447 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:32:19.197696  697447 kubeadm.go:602] duration metric: took 45.688385ms to restartPrimaryControlPlane
	I1115 10:32:19.197720  697447 kubeadm.go:403] duration metric: took 158.626169ms to StartCluster
	I1115 10:32:19.197762  697447 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:19.197854  697447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:32:19.198835  697447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:19.199108  697447 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:32:19.199539  697447 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:32:19.199613  697447 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-448285"
	I1115 10:32:19.199630  697447 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-448285"
	W1115 10:32:19.199637  697447 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:32:19.199660  697447 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:32:19.200266  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.200534  697447 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:32:19.200627  697447 addons.go:70] Setting dashboard=true in profile "old-k8s-version-448285"
	I1115 10:32:19.200657  697447 addons.go:239] Setting addon dashboard=true in "old-k8s-version-448285"
	W1115 10:32:19.200695  697447 addons.go:248] addon dashboard should already be in state true
	I1115 10:32:19.200735  697447 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:32:19.201203  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.203946  697447 out.go:179] * Verifying Kubernetes components...
	I1115 10:32:19.204197  697447 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-448285"
	I1115 10:32:19.204233  697447 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448285"
	I1115 10:32:19.204578  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.208987  697447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:19.249688  697447 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:32:19.253704  697447 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:32:19.256637  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:32:19.256662  697447 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:32:19.256732  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:19.272418  697447 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:32:19.273531  697447 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-448285"
	W1115 10:32:19.273549  697447 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:32:19.273574  697447 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:32:19.274230  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.277680  697447 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:19.277702  697447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:32:19.277762  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:19.313897  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:19.325553  697447 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:19.325574  697447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:32:19.325700  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:19.332688  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:19.358224  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:19.563343  697447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:32:19.583309  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:32:19.583382  697447 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:32:19.603303  697447 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448285" to be "Ready" ...
	I1115 10:32:19.624835  697447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:19.627435  697447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:19.635553  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:32:19.635628  697447 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:32:19.672211  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:32:19.672289  697447 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:32:19.737531  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:32:19.737625  697447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:32:19.803217  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:32:19.803291  697447 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:32:19.896773  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:32:19.896801  697447 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:32:19.929428  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:32:19.929455  697447 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:32:19.951355  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:32:19.951380  697447 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:32:19.976058  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:32:19.976083  697447 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:32:19.995725  697447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
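	The same kubectl invocation pattern can be reused to watch what that apply creates; the binary and kubeconfig paths are the ones from the command above, and kubernetes-dashboard is the namespace the dashboard manifests install into:
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl -n kubernetes-dashboard get deploy,svc,pods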
	I1115 10:32:23.936309  697447 node_ready.go:49] node "old-k8s-version-448285" is "Ready"
	I1115 10:32:23.936340  697447 node_ready.go:38] duration metric: took 4.332950026s for node "old-k8s-version-448285" to be "Ready" ...
	I1115 10:32:23.936353  697447 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:32:23.936412  697447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:32:25.033498  697447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.408555009s)
	I1115 10:32:25.632591  697447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.005070186s)
	I1115 10:32:26.225139  697447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.229372615s)
	I1115 10:32:26.225189  697447 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.288756726s)
	I1115 10:32:26.225381  697447 api_server.go:72] duration metric: took 7.02620682s to wait for apiserver process to appear ...
	I1115 10:32:26.225391  697447 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:32:26.225409  697447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:32:26.228381  697447 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-448285 addons enable metrics-server
	
	I1115 10:32:26.231369  697447 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:32:26.234278  697447 addons.go:515] duration metric: took 7.034728496s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:32:26.235500  697447 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
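	That probe can be reproduced by hand: /healthz and /readyz are readable without credentials under the default system:public-info-viewer binding, and -k is needed because the serving certificate is signed by the local minikubeCA rather than a public CA:
	  curl -sk https://192.168.85.2:8443/healthz; echo
	  curl -sk 'https://192.168.85.2:8443/readyz?verbose' | tail -n 3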
	I1115 10:32:26.236941  697447 api_server.go:141] control plane version: v1.28.0
	I1115 10:32:26.236965  697447 api_server.go:131] duration metric: took 11.567262ms to wait for apiserver health ...
	I1115 10:32:26.236977  697447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:32:26.240570  697447 system_pods.go:59] 8 kube-system pods found
	I1115 10:32:26.240609  697447 system_pods.go:61] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:32:26.240620  697447 system_pods.go:61] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:32:26.240627  697447 system_pods.go:61] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:32:26.240634  697447 system_pods.go:61] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:32:26.240645  697447 system_pods.go:61] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:32:26.240653  697447 system_pods.go:61] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:32:26.240660  697447 system_pods.go:61] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:32:26.240665  697447 system_pods.go:61] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Running
	I1115 10:32:26.240674  697447 system_pods.go:74] duration metric: took 3.668534ms to wait for pod list to return data ...
	I1115 10:32:26.240684  697447 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:32:26.242949  697447 default_sa.go:45] found service account: "default"
	I1115 10:32:26.242974  697447 default_sa.go:55] duration metric: took 2.28477ms for default service account to be created ...
	I1115 10:32:26.242984  697447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:32:26.246108  697447 system_pods.go:86] 8 kube-system pods found
	I1115 10:32:26.246137  697447 system_pods.go:89] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:32:26.246145  697447 system_pods.go:89] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:32:26.246151  697447 system_pods.go:89] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:32:26.246158  697447 system_pods.go:89] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:32:26.246168  697447 system_pods.go:89] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:32:26.246179  697447 system_pods.go:89] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:32:26.246186  697447 system_pods.go:89] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:32:26.246190  697447 system_pods.go:89] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Running
	I1115 10:32:26.246200  697447 system_pods.go:126] duration metric: took 3.211144ms to wait for k8s-apps to be running ...
	I1115 10:32:26.246207  697447 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:32:26.246268  697447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:32:26.260808  697447 system_svc.go:56] duration metric: took 14.592138ms WaitForService to wait for kubelet
	I1115 10:32:26.260874  697447 kubeadm.go:587] duration metric: took 7.061707992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:32:26.260910  697447 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:32:26.263757  697447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:32:26.263796  697447 node_conditions.go:123] node cpu capacity is 2
	I1115 10:32:26.263809  697447 node_conditions.go:105] duration metric: took 2.879305ms to run NodePressure ...
	I1115 10:32:26.263821  697447 start.go:242] waiting for startup goroutines ...
	I1115 10:32:26.263828  697447 start.go:247] waiting for cluster config update ...
	I1115 10:32:26.263840  697447 start.go:256] writing updated cluster config ...
	I1115 10:32:26.264155  697447 ssh_runner.go:195] Run: rm -f paused
	I1115 10:32:26.268218  697447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:32:26.272516  697447 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6rz72" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:32:28.277989  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:30.279852  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:32.778841  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:35.279078  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:37.280183  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:39.781765  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:42.282493  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:44.779117  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:47.279418  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:49.778858  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:51.778960  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:54.278367  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:56.278760  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:58.778325  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:33:00.778463  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	I1115 10:33:02.278872  697447 pod_ready.go:94] pod "coredns-5dd5756b68-6rz72" is "Ready"
	I1115 10:33:02.278904  697447 pod_ready.go:86] duration metric: took 36.00636396s for pod "coredns-5dd5756b68-6rz72" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.282056  697447 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.287189  697447 pod_ready.go:94] pod "etcd-old-k8s-version-448285" is "Ready"
	I1115 10:33:02.287259  697447 pod_ready.go:86] duration metric: took 5.181083ms for pod "etcd-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.290384  697447 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.295494  697447 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-448285" is "Ready"
	I1115 10:33:02.295523  697447 pod_ready.go:86] duration metric: took 5.115206ms for pod "kube-apiserver-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.298698  697447 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.476397  697447 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-448285" is "Ready"
	I1115 10:33:02.476476  697447 pod_ready.go:86] duration metric: took 177.751064ms for pod "kube-controller-manager-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.677273  697447 pod_ready.go:83] waiting for pod "kube-proxy-5pzbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.081524  697447 pod_ready.go:94] pod "kube-proxy-5pzbj" is "Ready"
	I1115 10:33:03.081626  697447 pod_ready.go:86] duration metric: took 404.325062ms for pod "kube-proxy-5pzbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.276232  697447 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.676818  697447 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-448285" is "Ready"
	I1115 10:33:03.676845  697447 pod_ready.go:86] duration metric: took 400.585892ms for pod "kube-scheduler-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.676859  697447 pod_ready.go:40] duration metric: took 37.408600703s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:33:03.732211  697447 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1115 10:33:03.735402  697447 out.go:203] 
	W1115 10:33:03.738391  697447 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:33:03.741246  697447 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:33:03.744289  697447 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-448285" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.528615121Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=78a43f5e-c520-49ce-b34b-1d318d64d9f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.529482986Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6abd7d5e-7809-4a7c-8929-c4330e41a51e name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.530895878Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper" id=ff970252-3e9f-405c-8326-02261f54086e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.531027181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.542284035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.542817911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.559862871Z" level=info msg="Created container a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper" id=ff970252-3e9f-405c-8326-02261f54086e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.560519296Z" level=info msg="Starting container: a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e" id=570f5860-92dd-41d0-afa6-dfdea1e5863c name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.56247376Z" level=info msg="Started container" PID=1632 containerID=a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper id=570f5860-92dd-41d0-afa6-dfdea1e5863c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19
	Nov 15 10:32:58 old-k8s-version-448285 conmon[1630]: conmon a4a632917a903434c332 <ninfo>: container 1632 exited with status 1
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.696866453Z" level=info msg="Removing container: 38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215" id=87c584a5-7b8f-4991-88c5-0af67a532ac2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.709203654Z" level=info msg="Error loading conmon cgroup of container 38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215: cgroup deleted" id=87c584a5-7b8f-4991-88c5-0af67a532ac2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.712286858Z" level=info msg="Removed container 38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper" id=87c584a5-7b8f-4991-88c5-0af67a532ac2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.337854332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.34413849Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.344174485Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.344196671Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.347398092Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.347430961Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.347456494Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.350341214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.350370867Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.350395531Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.35353544Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.353563656Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a4a632917a903       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   9ee3fdfbc25ac       dashboard-metrics-scraper-5f989dc9cf-npv27       kubernetes-dashboard
	eface29866b4e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   cd657cfaaa433       storage-provisioner                              kube-system
	84fde01cef2dc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   73d10af3b5e94       kubernetes-dashboard-8694d4445c-l44x5            kubernetes-dashboard
	bdf2a38455c8b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   b0eddf057d964       busybox                                          default
	ece1d63316e09       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   14af18612e897       coredns-5dd5756b68-6rz72                         kube-system
	4313ada3fc01b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   87460fd466a35       kube-proxy-5pzbj                                 kube-system
	0c4a767334702       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   4e311c55753a3       kindnet-4sxqn                                    kube-system
	39b0b5e22ac1a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   cd657cfaaa433       storage-provisioner                              kube-system
	cdd43d36d844c       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   421c314e1c37c       kube-controller-manager-old-k8s-version-448285   kube-system
	a04207eaf351e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b58c32f5e9e31       kube-apiserver-old-k8s-version-448285            kube-system
	d18121baff438       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   1544d8b851a9f       kube-scheduler-old-k8s-version-448285            kube-system
	1a5cd2047b2ca       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   d3a78f3914063       etcd-old-k8s-version-448285                      kube-system
	
	
	==> coredns [ece1d63316e0920d4b4f13ed85133a023dc0060eec2887ccb385777009fb3bc3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51068 - 5718 "HINFO IN 6291190729975920619.953597467972446564. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022733665s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-448285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-448285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=old-k8s-version-448285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_31_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:31:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-448285
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:33:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-448285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                992b5604-a676-4f5a-a947-58bb000cddf9
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-6rz72                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-448285                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-4sxqn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-448285             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-448285    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-5pzbj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-448285             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-npv27        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-l44x5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-448285 event: Registered Node old-k8s-version-448285 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-448285 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                  node-controller  Node old-k8s-version-448285 event: Registered Node old-k8s-version-448285 in Controller
	
	
	==> dmesg <==
	[Nov15 10:05] overlayfs: idmapped layers are currently not supported
	[Nov15 10:09] overlayfs: idmapped layers are currently not supported
	[Nov15 10:10] overlayfs: idmapped layers are currently not supported
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf] <==
	{"level":"info","ts":"2025-11-15T10:32:19.335669Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T10:32:19.335827Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T10:32:19.337986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-15T10:32:19.339605Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-15T10:32:19.349754Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:32:19.353241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:32:19.397751Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T10:32:19.397923Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:32:19.397932Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:32:19.399838Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T10:32:19.399868Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T10:32:20.524664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-15T10:32:20.5248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-15T10:32:20.524853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T10:32:20.524907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.524942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.524992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.525037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.52764Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-448285 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T10:32:20.527879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:32:20.528074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:32:20.528167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:32:20.528261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:32:20.529009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T10:32:20.529201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:33:19 up  5:15,  0 user,  load average: 1.80, 3.00, 2.68
	Linux old-k8s-version-448285 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c4a76733470254348c10dd1fa258a14f22a9cff5d0003ab9b429eb6979709b0] <==
	I1115 10:32:25.112324       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:32:25.112582       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:32:25.112698       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:32:25.112709       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:32:25.112719       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:32:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:32:25.344134       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:32:25.344158       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:32:25.344166       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:32:25.344292       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:32:55.335670       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:32:55.342748       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:32:55.342905       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:32:55.344181       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:32:56.644680       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:32:56.644729       1 metrics.go:72] Registering metrics
	I1115 10:32:56.644790       1 controller.go:711] "Syncing nftables rules"
	I1115 10:33:05.337492       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:33:05.337564       1 main.go:301] handling current node
	I1115 10:33:15.337316       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:33:15.337429       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49] <==
	I1115 10:32:23.720696       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1115 10:32:24.012934       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 10:32:24.012988       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1115 10:32:24.014195       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 10:32:24.044396       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:32:24.058806       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1115 10:32:24.059721       1 aggregator.go:166] initial CRD sync complete...
	I1115 10:32:24.059746       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 10:32:24.059753       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:32:24.059760       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:32:24.061847       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 10:32:24.061872       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 10:32:24.079308       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 10:32:24.101767       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:32:24.726243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:32:26.040429       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 10:32:26.090349       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 10:32:26.115243       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:32:26.127411       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:32:26.137451       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 10:32:26.199545       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.122.85"}
	I1115 10:32:26.217656       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.116.220"}
	I1115 10:32:35.654330       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 10:32:35.912088       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 10:32:35.961526       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cdd43d36d844ca41497db349276a208a21531ba8c1b2993cabbb0595b94c98eb] <==
	I1115 10:32:35.756902       1 shared_informer.go:318] Caches are synced for endpoint
	I1115 10:32:35.761931       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:32:35.765676       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:32:35.767735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="93.093µs"
	I1115 10:32:35.769507       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1115 10:32:35.771411       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1115 10:32:35.790546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.709223ms"
	I1115 10:32:35.790654       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1115 10:32:35.791471       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1115 10:32:35.792273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="600.73µs"
	I1115 10:32:35.796878       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1115 10:32:35.804813       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1115 10:32:35.847143       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1115 10:32:36.182682       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:32:36.182714       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:32:36.236733       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:32:40.647397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.984µs"
	I1115 10:32:41.655685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.426µs"
	I1115 10:32:42.663011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.213µs"
	I1115 10:32:45.678644       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.804979ms"
	I1115 10:32:45.680017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.574µs"
	I1115 10:32:58.708946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.905µs"
	I1115 10:33:02.064085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.336935ms"
	I1115 10:33:02.064217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.095µs"
	I1115 10:33:06.027407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.318µs"
	
	
	==> kube-proxy [4313ada3fc01b4d192454a21182be5e6fd97a5a0c54bb1af6a233af293f09eee] <==
	I1115 10:32:25.438612       1 server_others.go:69] "Using iptables proxy"
	I1115 10:32:25.466225       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 10:32:25.487396       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:32:25.489569       1 server_others.go:152] "Using iptables Proxier"
	I1115 10:32:25.489908       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 10:32:25.489921       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 10:32:25.489955       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 10:32:25.490194       1 server.go:846] "Version info" version="v1.28.0"
	I1115 10:32:25.490221       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:32:25.491648       1 config.go:188] "Starting service config controller"
	I1115 10:32:25.492760       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 10:32:25.492850       1 config.go:97] "Starting endpoint slice config controller"
	I1115 10:32:25.492881       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 10:32:25.495617       1 config.go:315] "Starting node config controller"
	I1115 10:32:25.496550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 10:32:25.595244       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 10:32:25.595306       1 shared_informer.go:318] Caches are synced for service config
	I1115 10:32:25.596939       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d18121baff438aa2b63e503bb022ccc4a3ae97ae800bce558ce414199861e699] <==
	I1115 10:32:22.114814       1 serving.go:348] Generated self-signed cert in-memory
	W1115 10:32:23.842931       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:32:23.842957       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:32:23.842965       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:32:23.842972       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:32:23.963348       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 10:32:23.963466       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:32:23.965458       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 10:32:23.968151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:32:23.968236       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 10:32:23.968280       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 10:32:24.069823       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:32:35 old-k8s-version-448285 kubelet[778]: I1115 10:32:35.820299     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6rkb\" (UniqueName: \"kubernetes.io/projected/99aa18b9-d4b8-43be-9700-bdbd82aaf4fd-kube-api-access-p6rkb\") pod \"dashboard-metrics-scraper-5f989dc9cf-npv27\" (UID: \"99aa18b9-d4b8-43be-9700-bdbd82aaf4fd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27"
	Nov 15 10:32:35 old-k8s-version-448285 kubelet[778]: I1115 10:32:35.921571     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjlm\" (UniqueName: \"kubernetes.io/projected/eb4f1b09-dbba-4d40-a2fa-e31fbc421449-kube-api-access-fcjlm\") pod \"kubernetes-dashboard-8694d4445c-l44x5\" (UID: \"eb4f1b09-dbba-4d40-a2fa-e31fbc421449\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l44x5"
	Nov 15 10:32:35 old-k8s-version-448285 kubelet[778]: I1115 10:32:35.921834     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb4f1b09-dbba-4d40-a2fa-e31fbc421449-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-l44x5\" (UID: \"eb4f1b09-dbba-4d40-a2fa-e31fbc421449\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l44x5"
	Nov 15 10:32:36 old-k8s-version-448285 kubelet[778]: W1115 10:32:36.042039     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19 WatchSource:0}: Error finding container 9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19: Status 404 returned error can't find the container with id 9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19
	Nov 15 10:32:36 old-k8s-version-448285 kubelet[778]: W1115 10:32:36.354080     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-73d10af3b5e9463187b7fa34fc52a8f097e301718e2e768dfdd84bc2299f321d WatchSource:0}: Error finding container 73d10af3b5e9463187b7fa34fc52a8f097e301718e2e768dfdd84bc2299f321d: Status 404 returned error can't find the container with id 73d10af3b5e9463187b7fa34fc52a8f097e301718e2e768dfdd84bc2299f321d
	Nov 15 10:32:40 old-k8s-version-448285 kubelet[778]: I1115 10:32:40.631153     778 scope.go:117] "RemoveContainer" containerID="e22560aa487e7a82bb38fa8c0ad73c6d5e9da25baf6742e186573f180e28cffb"
	Nov 15 10:32:41 old-k8s-version-448285 kubelet[778]: I1115 10:32:41.637849     778 scope.go:117] "RemoveContainer" containerID="e22560aa487e7a82bb38fa8c0ad73c6d5e9da25baf6742e186573f180e28cffb"
	Nov 15 10:32:41 old-k8s-version-448285 kubelet[778]: I1115 10:32:41.638157     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:41 old-k8s-version-448285 kubelet[778]: E1115 10:32:41.638422     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:32:42 old-k8s-version-448285 kubelet[778]: I1115 10:32:42.641894     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:42 old-k8s-version-448285 kubelet[778]: E1115 10:32:42.642758     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:32:46 old-k8s-version-448285 kubelet[778]: I1115 10:32:46.010440     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:46 old-k8s-version-448285 kubelet[778]: E1115 10:32:46.010771     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:32:55 old-k8s-version-448285 kubelet[778]: I1115 10:32:55.674851     778 scope.go:117] "RemoveContainer" containerID="39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2"
	Nov 15 10:32:55 old-k8s-version-448285 kubelet[778]: I1115 10:32:55.708736     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l44x5" podStartSLOduration=11.785265772 podCreationTimestamp="2025-11-15 10:32:35 +0000 UTC" firstStartedPulling="2025-11-15 10:32:36.35859146 +0000 UTC m=+18.064630084" lastFinishedPulling="2025-11-15 10:32:45.28130617 +0000 UTC m=+26.987344794" observedRunningTime="2025-11-15 10:32:45.668556275 +0000 UTC m=+27.374594899" watchObservedRunningTime="2025-11-15 10:32:55.707980482 +0000 UTC m=+37.414019105"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: I1115 10:32:58.528056     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: I1115 10:32:58.685682     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: I1115 10:32:58.685937     778 scope.go:117] "RemoveContainer" containerID="a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: E1115 10:32:58.686207     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:33:06 old-k8s-version-448285 kubelet[778]: I1115 10:33:06.010804     778 scope.go:117] "RemoveContainer" containerID="a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	Nov 15 10:33:06 old-k8s-version-448285 kubelet[778]: E1115 10:33:06.011128     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:33:17 old-k8s-version-448285 kubelet[778]: I1115 10:33:17.006321     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 15 10:33:17 old-k8s-version-448285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:33:17 old-k8s-version-448285 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:33:17 old-k8s-version-448285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [84fde01cef2dc7f7c8f204de97b73b47e569756843d2f46859eb14b56140d6fd] <==
	2025/11/15 10:32:45 Using namespace: kubernetes-dashboard
	2025/11/15 10:32:45 Using in-cluster config to connect to apiserver
	2025/11/15 10:32:45 Using secret token for csrf signing
	2025/11/15 10:32:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:32:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:32:45 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 10:32:45 Generating JWE encryption key
	2025/11/15 10:32:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:32:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:32:45 Initializing JWE encryption key from synchronized object
	2025/11/15 10:32:45 Creating in-cluster Sidecar client
	2025/11/15 10:32:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:32:45 Serving insecurely on HTTP port: 9090
	2025/11/15 10:33:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:32:45 Starting overwatch
	
	
	==> storage-provisioner [39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2] <==
	I1115 10:32:25.261485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:32:55.263283       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eface29866b4e016367d33114ac59baf51eba41f487b4403f1437106ef9f4c88] <==
	I1115 10:32:55.722466       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:32:55.738406       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:32:55.738463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:33:13.137953       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:33:13.138178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448285_c35e07f9-ceb2-483a-b66c-ba2e8829245d!
	I1115 10:33:13.139487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85aebc5f-7825-4f4f-9b86-fd9c4a02df82", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-448285_c35e07f9-ceb2-483a-b66c-ba2e8829245d became leader
	I1115 10:33:13.239104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448285_c35e07f9-ceb2-483a-b66c-ba2e8829245d!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-448285 -n old-k8s-version-448285
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-448285 -n old-k8s-version-448285: exit status 2 (366.213956ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-448285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-448285
helpers_test.go:243: (dbg) docker inspect old-k8s-version-448285:

-- stdout --
	[
	    {
	        "Id": "8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a",
	        "Created": "2025-11-15T10:30:52.988114549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 697572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:32:11.620488955Z",
	            "FinishedAt": "2025-11-15T10:32:10.824705848Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/hosts",
	        "LogPath": "/var/lib/docker/containers/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a-json.log",
	        "Name": "/old-k8s-version-448285",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-448285:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-448285",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a",
	                "LowerDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/116820d197edeebf23a39258ee40debc02ab3090b549d9a51993c7ba7572d15a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-448285",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-448285/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-448285",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-448285",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-448285",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ea577443d47296b0bde358b023c4212aaf7984b0bc34031e2840d0f3a56877a4",
	            "SandboxKey": "/var/run/docker/netns/ea577443d472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-448285": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:72:ef:13:61:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12ca3f9e094ca466512267d8c860ff57126abdec6db67bd16d9375e8738c15d5",
	                    "EndpointID": "9b1aa47e759866b8131a4e81e3a0c987261abee8b02d48f217db8ab86305d37f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-448285",
	                        "8d49869cd1fd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
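The inspect output above shows the kicbase container for this profile publishing its guest ports (22, 2376, 5000, 8443, 32443) on 127.0.0.1, with 22/tcp mapped to host port 33784. As a minimal sketch of how that port is resolved (the same Docker template the harness itself runs later in these logs):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-448285
	# prints 33784 for the state captured above; the SSH provisioning below dials 127.0.0.1:33784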
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285: exit status 2 (359.484591ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-448285 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-448285 logs -n 25: (1.290208829s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-864099 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo containerd config dump                                                                                                                                                                                                  │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo crio config                                                                                                                                                                                                             │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-864099                                                                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-683299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p kubernetes-upgrade-480353                                                                                                                                                                                                                  │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-683299                                                                                                                                                                                                                   │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-115480 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-448285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
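	The final audit row is the operation this post-mortem covers; as a sketch, the failing step can be re-run by hand against the same profile with:

	  out/minikube-linux-arm64 pause -p old-k8s-version-448285 --alsologtostderr -v=1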
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:32:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:32:11.362431  697447 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:32:11.362582  697447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:32:11.362614  697447 out.go:374] Setting ErrFile to fd 2...
	I1115 10:32:11.362627  697447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:32:11.363593  697447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:32:11.364063  697447 out.go:368] Setting JSON to false
	I1115 10:32:11.364996  697447 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18883,"bootTime":1763183849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:32:11.365068  697447 start.go:143] virtualization:  
	I1115 10:32:11.366781  697447 out.go:179] * [old-k8s-version-448285] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:32:11.367985  697447 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:32:11.368066  697447 notify.go:221] Checking for updates...
	I1115 10:32:11.370378  697447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:32:11.371435  697447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:32:11.372473  697447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:32:11.373578  697447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:32:11.374591  697447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:32:11.376165  697447 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:32:11.377819  697447 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1115 10:32:11.378781  697447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:32:11.416963  697447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:32:11.417118  697447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:32:11.477332  697447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:32:11.467952076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:32:11.477440  697447 docker.go:319] overlay module found
	I1115 10:32:11.478766  697447 out.go:179] * Using the docker driver based on existing profile
	I1115 10:32:11.479827  697447 start.go:309] selected driver: docker
	I1115 10:32:11.479843  697447 start.go:930] validating driver "docker" against &{Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:11.479945  697447 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:32:11.480665  697447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:32:11.542605  697447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:32:11.533752596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:32:11.542943  697447 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:32:11.542977  697447 cni.go:84] Creating CNI manager for ""
	I1115 10:32:11.543038  697447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:11.543087  697447 start.go:353] cluster config:
	{Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:11.544461  697447 out.go:179] * Starting "old-k8s-version-448285" primary control-plane node in "old-k8s-version-448285" cluster
	I1115 10:32:11.545470  697447 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:32:11.546479  697447 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:32:11.547513  697447 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:32:11.547560  697447 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 10:32:11.547573  697447 cache.go:65] Caching tarball of preloaded images
	I1115 10:32:11.547604  697447 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:32:11.547668  697447 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:32:11.547679  697447 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 10:32:11.547784  697447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json ...
	I1115 10:32:11.567170  697447 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:32:11.567190  697447 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:32:11.567202  697447 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:32:11.567224  697447 start.go:360] acquireMachinesLock for old-k8s-version-448285: {Name:mk5fdf42c0c76187fa0952dcaa2e938d4fb739c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:32:11.567273  697447 start.go:364] duration metric: took 32.418µs to acquireMachinesLock for "old-k8s-version-448285"
	I1115 10:32:11.567292  697447 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:32:11.567298  697447 fix.go:54] fixHost starting: 
	I1115 10:32:11.567554  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:11.583749  697447 fix.go:112] recreateIfNeeded on old-k8s-version-448285: state=Stopped err=<nil>
	W1115 10:32:11.583777  697447 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:32:11.585105  697447 out.go:252] * Restarting existing docker container for "old-k8s-version-448285" ...
	I1115 10:32:11.585194  697447 cli_runner.go:164] Run: docker start old-k8s-version-448285
	I1115 10:32:11.846786  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:11.871237  697447 kic.go:430] container "old-k8s-version-448285" state is running.
	I1115 10:32:11.871625  697447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:32:11.896226  697447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/config.json ...
	I1115 10:32:11.896440  697447 machine.go:94] provisionDockerMachine start ...
	I1115 10:32:11.896593  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:11.915876  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:11.916218  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:11.916228  697447 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:32:11.916874  697447 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55302->127.0.0.1:33784: read: connection reset by peer
	I1115 10:32:15.077327  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448285
	
	I1115 10:32:15.077349  697447 ubuntu.go:182] provisioning hostname "old-k8s-version-448285"
	I1115 10:32:15.077437  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.096311  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:15.096631  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:15.096649  697447 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-448285 && echo "old-k8s-version-448285" | sudo tee /etc/hostname
	I1115 10:32:15.259159  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-448285
	
	I1115 10:32:15.259236  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.276483  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:15.276900  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:15.276921  697447 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-448285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-448285/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-448285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:32:15.426369  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:32:15.426434  697447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:32:15.426470  697447 ubuntu.go:190] setting up certificates
	I1115 10:32:15.426499  697447 provision.go:84] configureAuth start
	I1115 10:32:15.426602  697447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:32:15.444146  697447 provision.go:143] copyHostCerts
	I1115 10:32:15.444211  697447 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:32:15.444234  697447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:32:15.444311  697447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:32:15.444565  697447 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:32:15.444574  697447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:32:15.444611  697447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:32:15.444720  697447 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:32:15.444725  697447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:32:15.444751  697447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:32:15.444799  697447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-448285 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-448285]
	I1115 10:32:15.784566  697447 provision.go:177] copyRemoteCerts
	I1115 10:32:15.784639  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:32:15.784683  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.803346  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:15.909992  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:32:15.928338  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:32:15.947363  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:32:15.966209  697447 provision.go:87] duration metric: took 539.661617ms to configureAuth
	I1115 10:32:15.966238  697447 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:32:15.966429  697447 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:32:15.966547  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:15.984795  697447 main.go:143] libmachine: Using SSH client type: native
	I1115 10:32:15.985188  697447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33784 <nil> <nil>}
	I1115 10:32:15.985207  697447 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:32:16.295918  697447 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:32:16.296010  697447 machine.go:97] duration metric: took 4.399560663s to provisionDockerMachine
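	The provisioning step above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube inside the node and restarts cri-o. A sketch for confirming the file landed, using standard minikube ssh syntax against this profile:

	  out/minikube-linux-arm64 -p old-k8s-version-448285 ssh -- cat /etc/sysconfig/crio.minikube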
	I1115 10:32:16.296045  697447 start.go:293] postStartSetup for "old-k8s-version-448285" (driver="docker")
	I1115 10:32:16.296094  697447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:32:16.296240  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:32:16.296331  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.319278  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.425486  697447 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:32:16.428770  697447 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:32:16.428800  697447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:32:16.428812  697447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:32:16.428867  697447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:32:16.428947  697447 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:32:16.429057  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:32:16.436538  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:32:16.454414  697447 start.go:296] duration metric: took 158.319914ms for postStartSetup
	I1115 10:32:16.454506  697447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:32:16.454550  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.470472  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.570216  697447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:32:16.576614  697447 fix.go:56] duration metric: took 5.009308893s for fixHost
	I1115 10:32:16.576640  697447 start.go:83] releasing machines lock for "old-k8s-version-448285", held for 5.009359649s
	I1115 10:32:16.576714  697447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-448285
	I1115 10:32:16.593791  697447 ssh_runner.go:195] Run: cat /version.json
	I1115 10:32:16.593856  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.594130  697447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:32:16.594211  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:16.625047  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.633732  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:16.816371  697447 ssh_runner.go:195] Run: systemctl --version
	I1115 10:32:16.822882  697447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:32:16.857564  697447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:32:16.862277  697447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:32:16.862341  697447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:32:16.869548  697447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:32:16.869568  697447 start.go:496] detecting cgroup driver to use...
	I1115 10:32:16.869648  697447 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:32:16.869706  697447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:32:16.884740  697447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:32:16.898636  697447 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:32:16.898696  697447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:32:16.913440  697447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:32:16.926108  697447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:32:17.049825  697447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:32:17.177782  697447 docker.go:234] disabling docker service ...
	I1115 10:32:17.177859  697447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:32:17.195315  697447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:32:17.208878  697447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:32:17.324116  697447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:32:17.450411  697447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:32:17.465979  697447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:32:17.481364  697447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1115 10:32:17.481460  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.490482  697447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:32:17.490572  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.499841  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.508818  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.517442  697447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:32:17.525332  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.534843  697447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.543138  697447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:32:17.552520  697447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:32:17.559678  697447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:32:17.566651  697447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:17.681697  697447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:32:17.815736  697447 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:32:17.815807  697447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:32:17.819583  697447 start.go:564] Will wait 60s for crictl version
	I1115 10:32:17.819712  697447 ssh_runner.go:195] Run: which crictl
	I1115 10:32:17.823390  697447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:32:17.849095  697447 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:32:17.849274  697447 ssh_runner.go:195] Run: crio --version
	I1115 10:32:17.877704  697447 ssh_runner.go:195] Run: crio --version
	I1115 10:32:17.916588  697447 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1115 10:32:17.919380  697447 cli_runner.go:164] Run: docker network inspect old-k8s-version-448285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:32:17.934736  697447 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:32:17.938432  697447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:17.947985  697447 kubeadm.go:884] updating cluster {Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:32:17.948113  697447 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 10:32:17.948170  697447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:17.981529  697447 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:17.981553  697447 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:32:17.981645  697447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:32:18.009169  697447 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:32:18.009201  697447 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:32:18.009210  697447 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1115 10:32:18.009308  697447 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-448285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
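	The kubelet unit fragment above becomes the 10-kubeadm.conf drop-in copied into /etc/systemd/system/kubelet.service.d/ further down. A sketch for inspecting the effective unit on the node (the same systemctl idiom the suite uses for containerd and crio in the audit table):

	  out/minikube-linux-arm64 -p old-k8s-version-448285 ssh -- sudo systemctl cat kubelet --no-pager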
	I1115 10:32:18.009410  697447 ssh_runner.go:195] Run: crio config
	I1115 10:32:18.081434  697447 cni.go:84] Creating CNI manager for ""
	I1115 10:32:18.081461  697447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:32:18.081508  697447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:32:18.081539  697447 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-448285 NodeName:old-k8s-version-448285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:32:18.081760  697447 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-448285"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
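	The kubeadm/kubelet/kube-proxy config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2160-byte scp a few lines below). As a sketch, it can be read back in place with:

	  out/minikube-linux-arm64 -p old-k8s-version-448285 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new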
	
	I1115 10:32:18.081862  697447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1115 10:32:18.090345  697447 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:32:18.090466  697447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:32:18.098436  697447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1115 10:32:18.111607  697447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:32:18.124035  697447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1115 10:32:18.137169  697447 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:32:18.140745  697447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:32:18.150290  697447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:18.273724  697447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:32:18.291725  697447 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285 for IP: 192.168.85.2
	I1115 10:32:18.291747  697447 certs.go:195] generating shared ca certs ...
	I1115 10:32:18.291762  697447 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:18.291980  697447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:32:18.292058  697447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:32:18.292080  697447 certs.go:257] generating profile certs ...
	I1115 10:32:18.292198  697447 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.key
	I1115 10:32:18.292295  697447 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key.28437dd2
	I1115 10:32:18.292376  697447 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key
	I1115 10:32:18.292518  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:32:18.292569  697447 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:32:18.292583  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:32:18.292619  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:32:18.292685  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:32:18.292714  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:32:18.292790  697447 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:32:18.293387  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:32:18.313137  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:32:18.330831  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:32:18.352976  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:32:18.369635  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 10:32:18.390179  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:32:18.407650  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:32:18.430857  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:32:18.455089  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:32:18.475322  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:32:18.500133  697447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:32:18.528431  697447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:32:18.541908  697447 ssh_runner.go:195] Run: openssl version
	I1115 10:32:18.550461  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:32:18.560706  697447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:32:18.565423  697447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:32:18.565491  697447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:32:18.613300  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:32:18.622238  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:32:18.643445  697447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:18.647885  697447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:18.647951  697447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:32:18.690761  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:32:18.699174  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:32:18.707433  697447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:32:18.711445  697447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:32:18.711535  697447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:32:18.753417  697447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:32:18.761354  697447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:32:18.765638  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:32:18.806436  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:32:18.848855  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:32:18.890414  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:32:18.931298  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:32:18.980932  697447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
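	Note: the run above validates each control-plane certificate with openssl x509 -checkend 86400 (non-zero exit if the cert expires within 24 hours) and installs CA certificates by hashing them and symlinking the hash into /etc/ssl/certs. A minimal sketch of the same two checks done manually, using one of the certificate paths from this run:
	  # Exit status 0 means the certificate is still valid 86400 seconds (24h) from now.
	  openssl x509 -noout -in /usr/share/ca-certificates/minikubeCA.pem -checkend 86400
	  # Subject hash used for the /etc/ssl/certs/<hash>.0 symlink created above (b5213941 here).
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem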
	I1115 10:32:19.039105  697447 kubeadm.go:401] StartCluster: {Name:old-k8s-version-448285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-448285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:32:19.039232  697447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:32:19.039341  697447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:32:19.098131  697447 cri.go:89] found id: "a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49"
	I1115 10:32:19.098155  697447 cri.go:89] found id: "1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf"
	I1115 10:32:19.098159  697447 cri.go:89] found id: ""
	I1115 10:32:19.098212  697447 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:32:19.132629  697447 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:32:19Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:32:19.132789  697447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:32:19.151922  697447 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:32:19.151986  697447 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:32:19.152079  697447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:32:19.175009  697447 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:32:19.175648  697447 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-448285" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:32:19.175988  697447 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-448285" cluster setting kubeconfig missing "old-k8s-version-448285" context setting]
	I1115 10:32:19.176594  697447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
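	Note: at this point the kubeconfig at /home/jenkins/minikube-integration/21895-514793/kubeconfig has been repaired to include the "old-k8s-version-448285" cluster and context. A small sketch for confirming that by hand, assuming kubectl is pointed at the same file via KUBECONFIG:
	  KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig \
	    kubectl config get-contexts old-k8s-version-448285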
	I1115 10:32:19.178391  697447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:32:19.197615  697447 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:32:19.197696  697447 kubeadm.go:602] duration metric: took 45.688385ms to restartPrimaryControlPlane
	I1115 10:32:19.197720  697447 kubeadm.go:403] duration metric: took 158.626169ms to StartCluster
	I1115 10:32:19.197762  697447 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:19.197854  697447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:32:19.198835  697447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:32:19.199108  697447 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:32:19.199539  697447 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:32:19.199613  697447 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-448285"
	I1115 10:32:19.199630  697447 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-448285"
	W1115 10:32:19.199637  697447 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:32:19.199660  697447 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:32:19.200266  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.200534  697447 config.go:182] Loaded profile config "old-k8s-version-448285": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:32:19.200627  697447 addons.go:70] Setting dashboard=true in profile "old-k8s-version-448285"
	I1115 10:32:19.200657  697447 addons.go:239] Setting addon dashboard=true in "old-k8s-version-448285"
	W1115 10:32:19.200695  697447 addons.go:248] addon dashboard should already be in state true
	I1115 10:32:19.200735  697447 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:32:19.201203  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.203946  697447 out.go:179] * Verifying Kubernetes components...
	I1115 10:32:19.204197  697447 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-448285"
	I1115 10:32:19.204233  697447 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-448285"
	I1115 10:32:19.204578  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.208987  697447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:32:19.249688  697447 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:32:19.253704  697447 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:32:19.256637  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:32:19.256662  697447 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:32:19.256732  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:19.272418  697447 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:32:19.273531  697447 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-448285"
	W1115 10:32:19.273549  697447 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:32:19.273574  697447 host.go:66] Checking if "old-k8s-version-448285" exists ...
	I1115 10:32:19.274230  697447 cli_runner.go:164] Run: docker container inspect old-k8s-version-448285 --format={{.State.Status}}
	I1115 10:32:19.277680  697447 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:19.277702  697447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:32:19.277762  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:19.313897  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:19.325553  697447 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:19.325574  697447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:32:19.325700  697447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-448285
	I1115 10:32:19.332688  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:19.358224  697447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33784 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/old-k8s-version-448285/id_rsa Username:docker}
	I1115 10:32:19.563343  697447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:32:19.583309  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:32:19.583382  697447 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:32:19.603303  697447 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-448285" to be "Ready" ...
	I1115 10:32:19.624835  697447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:32:19.627435  697447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:32:19.635553  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:32:19.635628  697447 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:32:19.672211  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:32:19.672289  697447 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:32:19.737531  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:32:19.737625  697447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:32:19.803217  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:32:19.803291  697447 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:32:19.896773  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:32:19.896801  697447 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:32:19.929428  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:32:19.929455  697447 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:32:19.951355  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:32:19.951380  697447 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:32:19.976058  697447 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:32:19.976083  697447 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:32:19.995725  697447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:32:23.936309  697447 node_ready.go:49] node "old-k8s-version-448285" is "Ready"
	I1115 10:32:23.936340  697447 node_ready.go:38] duration metric: took 4.332950026s for node "old-k8s-version-448285" to be "Ready" ...
	I1115 10:32:23.936353  697447 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:32:23.936412  697447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:32:25.033498  697447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.408555009s)
	I1115 10:32:25.632591  697447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.005070186s)
	I1115 10:32:26.225139  697447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.229372615s)
	I1115 10:32:26.225189  697447 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.288756726s)
	I1115 10:32:26.225381  697447 api_server.go:72] duration metric: took 7.02620682s to wait for apiserver process to appear ...
	I1115 10:32:26.225391  697447 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:32:26.225409  697447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:32:26.228381  697447 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-448285 addons enable metrics-server
	
	I1115 10:32:26.231369  697447 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:32:26.234278  697447 addons.go:515] duration metric: took 7.034728496s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
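	Note: the dashboard addon applied above creates the kubernetes-dashboard and dashboard-metrics-scraper workloads seen later in this report. A minimal sketch for waiting on them directly; the deployment names are inferred from the pod names in the container listing below, and the kubectl context name is assumed to match the profile:
	  kubectl --context old-k8s-version-448285 -n kubernetes-dashboard \
	    rollout status deployment/kubernetes-dashboard --timeout=2m
	  kubectl --context old-k8s-version-448285 -n kubernetes-dashboard \
	    rollout status deployment/dashboard-metrics-scraper --timeout=2m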
	I1115 10:32:26.235500  697447 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 10:32:26.236941  697447 api_server.go:141] control plane version: v1.28.0
	I1115 10:32:26.236965  697447 api_server.go:131] duration metric: took 11.567262ms to wait for apiserver health ...
	I1115 10:32:26.236977  697447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:32:26.240570  697447 system_pods.go:59] 8 kube-system pods found
	I1115 10:32:26.240609  697447 system_pods.go:61] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:32:26.240620  697447 system_pods.go:61] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:32:26.240627  697447 system_pods.go:61] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:32:26.240634  697447 system_pods.go:61] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:32:26.240645  697447 system_pods.go:61] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:32:26.240653  697447 system_pods.go:61] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:32:26.240660  697447 system_pods.go:61] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:32:26.240665  697447 system_pods.go:61] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Running
	I1115 10:32:26.240674  697447 system_pods.go:74] duration metric: took 3.668534ms to wait for pod list to return data ...
	I1115 10:32:26.240684  697447 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:32:26.242949  697447 default_sa.go:45] found service account: "default"
	I1115 10:32:26.242974  697447 default_sa.go:55] duration metric: took 2.28477ms for default service account to be created ...
	I1115 10:32:26.242984  697447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:32:26.246108  697447 system_pods.go:86] 8 kube-system pods found
	I1115 10:32:26.246137  697447 system_pods.go:89] "coredns-5dd5756b68-6rz72" [1b9cd2bc-b240-497e-8cd9-6ebb31c76230] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:32:26.246145  697447 system_pods.go:89] "etcd-old-k8s-version-448285" [e0971773-43bd-4354-9a09-4b3423d890e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:32:26.246151  697447 system_pods.go:89] "kindnet-4sxqn" [15858d1d-82c8-4f57-b984-24a45188650c] Running
	I1115 10:32:26.246158  697447 system_pods.go:89] "kube-apiserver-old-k8s-version-448285" [79da2b0a-4965-4f39-b9a1-435376166c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:32:26.246168  697447 system_pods.go:89] "kube-controller-manager-old-k8s-version-448285" [676943af-e821-4997-9130-bf6cce8685ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:32:26.246179  697447 system_pods.go:89] "kube-proxy-5pzbj" [5a143b70-c11c-48d7-8cc3-9881bdd32a70] Running
	I1115 10:32:26.246186  697447 system_pods.go:89] "kube-scheduler-old-k8s-version-448285" [b045298c-16ca-418e-8622-fe1ab709a966] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:32:26.246190  697447 system_pods.go:89] "storage-provisioner" [ecc31eb8-2cae-47f4-9c85-8dbc48b1d546] Running
	I1115 10:32:26.246200  697447 system_pods.go:126] duration metric: took 3.211144ms to wait for k8s-apps to be running ...
	I1115 10:32:26.246207  697447 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:32:26.246268  697447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:32:26.260808  697447 system_svc.go:56] duration metric: took 14.592138ms WaitForService to wait for kubelet
	I1115 10:32:26.260874  697447 kubeadm.go:587] duration metric: took 7.061707992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:32:26.260910  697447 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:32:26.263757  697447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:32:26.263796  697447 node_conditions.go:123] node cpu capacity is 2
	I1115 10:32:26.263809  697447 node_conditions.go:105] duration metric: took 2.879305ms to run NodePressure ...
	I1115 10:32:26.263821  697447 start.go:242] waiting for startup goroutines ...
	I1115 10:32:26.263828  697447 start.go:247] waiting for cluster config update ...
	I1115 10:32:26.263840  697447 start.go:256] writing updated cluster config ...
	I1115 10:32:26.264155  697447 ssh_runner.go:195] Run: rm -f paused
	I1115 10:32:26.268218  697447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:32:26.272516  697447 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6rz72" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:32:28.277989  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:30.279852  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:32.778841  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:35.279078  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:37.280183  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:39.781765  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:42.282493  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:44.779117  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:47.279418  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:49.778858  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:51.778960  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:54.278367  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:56.278760  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:32:58.778325  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	W1115 10:33:00.778463  697447 pod_ready.go:104] pod "coredns-5dd5756b68-6rz72" is not "Ready", error: <nil>
	I1115 10:33:02.278872  697447 pod_ready.go:94] pod "coredns-5dd5756b68-6rz72" is "Ready"
	I1115 10:33:02.278904  697447 pod_ready.go:86] duration metric: took 36.00636396s for pod "coredns-5dd5756b68-6rz72" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.282056  697447 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.287189  697447 pod_ready.go:94] pod "etcd-old-k8s-version-448285" is "Ready"
	I1115 10:33:02.287259  697447 pod_ready.go:86] duration metric: took 5.181083ms for pod "etcd-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.290384  697447 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.295494  697447 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-448285" is "Ready"
	I1115 10:33:02.295523  697447 pod_ready.go:86] duration metric: took 5.115206ms for pod "kube-apiserver-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.298698  697447 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.476397  697447 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-448285" is "Ready"
	I1115 10:33:02.476476  697447 pod_ready.go:86] duration metric: took 177.751064ms for pod "kube-controller-manager-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:02.677273  697447 pod_ready.go:83] waiting for pod "kube-proxy-5pzbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.081524  697447 pod_ready.go:94] pod "kube-proxy-5pzbj" is "Ready"
	I1115 10:33:03.081626  697447 pod_ready.go:86] duration metric: took 404.325062ms for pod "kube-proxy-5pzbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.276232  697447 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.676818  697447 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-448285" is "Ready"
	I1115 10:33:03.676845  697447 pod_ready.go:86] duration metric: took 400.585892ms for pod "kube-scheduler-old-k8s-version-448285" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:33:03.676859  697447 pod_ready.go:40] duration metric: took 37.408600703s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:33:03.732211  697447 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1115 10:33:03.735402  697447 out.go:203] 
	W1115 10:33:03.738391  697447 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:33:03.741246  697447 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:33:03.744289  697447 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-448285" cluster and "default" namespace by default
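	Note: the pod_ready loop above polls the kube-system pods by label (k8s-app=kube-dns, component=etcd, and so on) until each is Ready. A rough equivalent using kubectl wait against the same selectors, with the context name assumed to match the profile:
	  kubectl --context old-k8s-version-448285 -n kube-system \
	    wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	  kubectl --context old-k8s-version-448285 -n kube-system \
	    wait pod -l component=kube-scheduler --for=condition=Ready --timeout=4m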
	
	
	==> CRI-O <==
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.528615121Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=78a43f5e-c520-49ce-b34b-1d318d64d9f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.529482986Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6abd7d5e-7809-4a7c-8929-c4330e41a51e name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.530895878Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper" id=ff970252-3e9f-405c-8326-02261f54086e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.531027181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.542284035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.542817911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.559862871Z" level=info msg="Created container a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper" id=ff970252-3e9f-405c-8326-02261f54086e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.560519296Z" level=info msg="Starting container: a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e" id=570f5860-92dd-41d0-afa6-dfdea1e5863c name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.56247376Z" level=info msg="Started container" PID=1632 containerID=a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper id=570f5860-92dd-41d0-afa6-dfdea1e5863c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19
	Nov 15 10:32:58 old-k8s-version-448285 conmon[1630]: conmon a4a632917a903434c332 <ninfo>: container 1632 exited with status 1
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.696866453Z" level=info msg="Removing container: 38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215" id=87c584a5-7b8f-4991-88c5-0af67a532ac2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.709203654Z" level=info msg="Error loading conmon cgroup of container 38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215: cgroup deleted" id=87c584a5-7b8f-4991-88c5-0af67a532ac2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:32:58 old-k8s-version-448285 crio[649]: time="2025-11-15T10:32:58.712286858Z" level=info msg="Removed container 38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27/dashboard-metrics-scraper" id=87c584a5-7b8f-4991-88c5-0af67a532ac2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.337854332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.34413849Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.344174485Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.344196671Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.347398092Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.347430961Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.347456494Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.350341214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.350370867Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.350395531Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.35353544Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:33:05 old-k8s-version-448285 crio[649]: time="2025-11-15T10:33:05.353563656Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a4a632917a903       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   9ee3fdfbc25ac       dashboard-metrics-scraper-5f989dc9cf-npv27       kubernetes-dashboard
	eface29866b4e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   cd657cfaaa433       storage-provisioner                              kube-system
	84fde01cef2dc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   73d10af3b5e94       kubernetes-dashboard-8694d4445c-l44x5            kubernetes-dashboard
	bdf2a38455c8b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   b0eddf057d964       busybox                                          default
	ece1d63316e09       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   14af18612e897       coredns-5dd5756b68-6rz72                         kube-system
	4313ada3fc01b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   87460fd466a35       kube-proxy-5pzbj                                 kube-system
	0c4a767334702       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   4e311c55753a3       kindnet-4sxqn                                    kube-system
	39b0b5e22ac1a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   cd657cfaaa433       storage-provisioner                              kube-system
	cdd43d36d844c       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   421c314e1c37c       kube-controller-manager-old-k8s-version-448285   kube-system
	a04207eaf351e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b58c32f5e9e31       kube-apiserver-old-k8s-version-448285            kube-system
	d18121baff438       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   1544d8b851a9f       kube-scheduler-old-k8s-version-448285            kube-system
	1a5cd2047b2ca       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   d3a78f3914063       etcd-old-k8s-version-448285                      kube-system
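	Note: the listing above shows dashboard-metrics-scraper in state Exited (attempt 2) after exiting with status 1 in the CRI-O log, while the other pods run. A minimal sketch for pulling its logs and exit state on the node, assuming a shell obtained via minikube ssh -p old-k8s-version-448285; the container ID prefix is taken from the table above:
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  sudo crictl logs a4a632917a903
	  sudo crictl inspect a4a632917a903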
	
	
	==> coredns [ece1d63316e0920d4b4f13ed85133a023dc0060eec2887ccb385777009fb3bc3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51068 - 5718 "HINFO IN 6291190729975920619.953597467972446564. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022733665s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-448285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-448285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=old-k8s-version-448285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_31_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:31:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-448285
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:33:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:32:54 +0000   Sat, 15 Nov 2025 10:31:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-448285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                992b5604-a676-4f5a-a947-58bb000cddf9
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-6rz72                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-448285                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-4sxqn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-448285             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-448285    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-5pzbj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-448285             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-npv27        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-l44x5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-448285 event: Registered Node old-k8s-version-448285 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-448285 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-448285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-448285 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-448285 event: Registered Node old-k8s-version-448285 in Controller
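	Note: the node description above is kubectl describe node output. A condensed check of the same conditions and allocatable resources, with the context name assumed to match the profile:
	  kubectl --context old-k8s-version-448285 get node old-k8s-version-448285 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	  kubectl --context old-k8s-version-448285 get node old-k8s-version-448285 \
	    -o jsonpath='{.status.allocatable}'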
	
	
	==> dmesg <==
	[Nov15 10:05] overlayfs: idmapped layers are currently not supported
	[Nov15 10:09] overlayfs: idmapped layers are currently not supported
	[Nov15 10:10] overlayfs: idmapped layers are currently not supported
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1a5cd2047b2ca328ac1490d683433d4b097cc6de151a17be5169e091591dc7cf] <==
	{"level":"info","ts":"2025-11-15T10:32:19.335669Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T10:32:19.335827Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-15T10:32:19.337986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-15T10:32:19.339605Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-15T10:32:19.349754Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:32:19.353241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T10:32:19.397751Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-15T10:32:19.397923Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:32:19.397932Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T10:32:19.399838Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T10:32:19.399868Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T10:32:20.524664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-15T10:32:20.5248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-15T10:32:20.524853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T10:32:20.524907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.524942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.524992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.525037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T10:32:20.52764Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-448285 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T10:32:20.527879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:32:20.528074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T10:32:20.528167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T10:32:20.528261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T10:32:20.529009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T10:32:20.529201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:33:21 up  5:15,  0 user,  load average: 1.80, 3.00, 2.68
	Linux old-k8s-version-448285 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c4a76733470254348c10dd1fa258a14f22a9cff5d0003ab9b429eb6979709b0] <==
	I1115 10:32:25.112324       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:32:25.112582       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:32:25.112698       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:32:25.112709       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:32:25.112719       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:32:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:32:25.344134       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:32:25.344158       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:32:25.344166       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:32:25.344292       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:32:55.335670       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:32:55.342748       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:32:55.342905       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:32:55.344181       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:32:56.644680       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:32:56.644729       1 metrics.go:72] Registering metrics
	I1115 10:32:56.644790       1 controller.go:711] "Syncing nftables rules"
	I1115 10:33:05.337492       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:33:05.337564       1 main.go:301] handling current node
	I1115 10:33:15.337316       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:33:15.337429       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a04207eaf351ed416e1ffa4bbbeb6745b4452a7b6c7658f55d11c230a8c97f49] <==
	I1115 10:32:23.720696       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1115 10:32:24.012934       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 10:32:24.012988       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1115 10:32:24.014195       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 10:32:24.044396       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:32:24.058806       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1115 10:32:24.059721       1 aggregator.go:166] initial CRD sync complete...
	I1115 10:32:24.059746       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 10:32:24.059753       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:32:24.059760       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:32:24.061847       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 10:32:24.061872       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 10:32:24.079308       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 10:32:24.101767       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:32:24.726243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:32:26.040429       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 10:32:26.090349       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 10:32:26.115243       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:32:26.127411       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:32:26.137451       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 10:32:26.199545       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.122.85"}
	I1115 10:32:26.217656       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.116.220"}
	I1115 10:32:35.654330       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 10:32:35.912088       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 10:32:35.961526       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cdd43d36d844ca41497db349276a208a21531ba8c1b2993cabbb0595b94c98eb] <==
	I1115 10:32:35.756902       1 shared_informer.go:318] Caches are synced for endpoint
	I1115 10:32:35.761931       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:32:35.765676       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 10:32:35.767735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="93.093µs"
	I1115 10:32:35.769507       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1115 10:32:35.771411       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1115 10:32:35.790546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.709223ms"
	I1115 10:32:35.790654       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1115 10:32:35.791471       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1115 10:32:35.792273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="600.73µs"
	I1115 10:32:35.796878       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1115 10:32:35.804813       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1115 10:32:35.847143       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1115 10:32:36.182682       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:32:36.182714       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:32:36.236733       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 10:32:40.647397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.984µs"
	I1115 10:32:41.655685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.426µs"
	I1115 10:32:42.663011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.213µs"
	I1115 10:32:45.678644       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.804979ms"
	I1115 10:32:45.680017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.574µs"
	I1115 10:32:58.708946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.905µs"
	I1115 10:33:02.064085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.336935ms"
	I1115 10:33:02.064217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.095µs"
	I1115 10:33:06.027407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.318µs"
	
	
	==> kube-proxy [4313ada3fc01b4d192454a21182be5e6fd97a5a0c54bb1af6a233af293f09eee] <==
	I1115 10:32:25.438612       1 server_others.go:69] "Using iptables proxy"
	I1115 10:32:25.466225       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 10:32:25.487396       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:32:25.489569       1 server_others.go:152] "Using iptables Proxier"
	I1115 10:32:25.489908       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 10:32:25.489921       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 10:32:25.489955       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 10:32:25.490194       1 server.go:846] "Version info" version="v1.28.0"
	I1115 10:32:25.490221       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:32:25.491648       1 config.go:188] "Starting service config controller"
	I1115 10:32:25.492760       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 10:32:25.492850       1 config.go:97] "Starting endpoint slice config controller"
	I1115 10:32:25.492881       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 10:32:25.495617       1 config.go:315] "Starting node config controller"
	I1115 10:32:25.496550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 10:32:25.595244       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 10:32:25.595306       1 shared_informer.go:318] Caches are synced for service config
	I1115 10:32:25.596939       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d18121baff438aa2b63e503bb022ccc4a3ae97ae800bce558ce414199861e699] <==
	I1115 10:32:22.114814       1 serving.go:348] Generated self-signed cert in-memory
	W1115 10:32:23.842931       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:32:23.842957       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:32:23.842965       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:32:23.842972       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:32:23.963348       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 10:32:23.963466       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:32:23.965458       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 10:32:23.968151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:32:23.968236       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 10:32:23.968280       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 10:32:24.069823       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 10:32:35 old-k8s-version-448285 kubelet[778]: I1115 10:32:35.820299     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6rkb\" (UniqueName: \"kubernetes.io/projected/99aa18b9-d4b8-43be-9700-bdbd82aaf4fd-kube-api-access-p6rkb\") pod \"dashboard-metrics-scraper-5f989dc9cf-npv27\" (UID: \"99aa18b9-d4b8-43be-9700-bdbd82aaf4fd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27"
	Nov 15 10:32:35 old-k8s-version-448285 kubelet[778]: I1115 10:32:35.921571     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjlm\" (UniqueName: \"kubernetes.io/projected/eb4f1b09-dbba-4d40-a2fa-e31fbc421449-kube-api-access-fcjlm\") pod \"kubernetes-dashboard-8694d4445c-l44x5\" (UID: \"eb4f1b09-dbba-4d40-a2fa-e31fbc421449\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l44x5"
	Nov 15 10:32:35 old-k8s-version-448285 kubelet[778]: I1115 10:32:35.921834     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb4f1b09-dbba-4d40-a2fa-e31fbc421449-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-l44x5\" (UID: \"eb4f1b09-dbba-4d40-a2fa-e31fbc421449\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l44x5"
	Nov 15 10:32:36 old-k8s-version-448285 kubelet[778]: W1115 10:32:36.042039     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19 WatchSource:0}: Error finding container 9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19: Status 404 returned error can't find the container with id 9ee3fdfbc25acb09784c25e627d4d58ba90263e0a820f90d0b5624a456043b19
	Nov 15 10:32:36 old-k8s-version-448285 kubelet[778]: W1115 10:32:36.354080     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d49869cd1fd38b2d21c4fbbeab8cb606f4e4956925d385bf1494b26c0b8659a/crio-73d10af3b5e9463187b7fa34fc52a8f097e301718e2e768dfdd84bc2299f321d WatchSource:0}: Error finding container 73d10af3b5e9463187b7fa34fc52a8f097e301718e2e768dfdd84bc2299f321d: Status 404 returned error can't find the container with id 73d10af3b5e9463187b7fa34fc52a8f097e301718e2e768dfdd84bc2299f321d
	Nov 15 10:32:40 old-k8s-version-448285 kubelet[778]: I1115 10:32:40.631153     778 scope.go:117] "RemoveContainer" containerID="e22560aa487e7a82bb38fa8c0ad73c6d5e9da25baf6742e186573f180e28cffb"
	Nov 15 10:32:41 old-k8s-version-448285 kubelet[778]: I1115 10:32:41.637849     778 scope.go:117] "RemoveContainer" containerID="e22560aa487e7a82bb38fa8c0ad73c6d5e9da25baf6742e186573f180e28cffb"
	Nov 15 10:32:41 old-k8s-version-448285 kubelet[778]: I1115 10:32:41.638157     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:41 old-k8s-version-448285 kubelet[778]: E1115 10:32:41.638422     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:32:42 old-k8s-version-448285 kubelet[778]: I1115 10:32:42.641894     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:42 old-k8s-version-448285 kubelet[778]: E1115 10:32:42.642758     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:32:46 old-k8s-version-448285 kubelet[778]: I1115 10:32:46.010440     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:46 old-k8s-version-448285 kubelet[778]: E1115 10:32:46.010771     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:32:55 old-k8s-version-448285 kubelet[778]: I1115 10:32:55.674851     778 scope.go:117] "RemoveContainer" containerID="39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2"
	Nov 15 10:32:55 old-k8s-version-448285 kubelet[778]: I1115 10:32:55.708736     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l44x5" podStartSLOduration=11.785265772 podCreationTimestamp="2025-11-15 10:32:35 +0000 UTC" firstStartedPulling="2025-11-15 10:32:36.35859146 +0000 UTC m=+18.064630084" lastFinishedPulling="2025-11-15 10:32:45.28130617 +0000 UTC m=+26.987344794" observedRunningTime="2025-11-15 10:32:45.668556275 +0000 UTC m=+27.374594899" watchObservedRunningTime="2025-11-15 10:32:55.707980482 +0000 UTC m=+37.414019105"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: I1115 10:32:58.528056     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: I1115 10:32:58.685682     778 scope.go:117] "RemoveContainer" containerID="38f45ea5e6d5dae55654dd6002215e0866fd49d0ef9d62338f774665d57e1215"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: I1115 10:32:58.685937     778 scope.go:117] "RemoveContainer" containerID="a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	Nov 15 10:32:58 old-k8s-version-448285 kubelet[778]: E1115 10:32:58.686207     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:33:06 old-k8s-version-448285 kubelet[778]: I1115 10:33:06.010804     778 scope.go:117] "RemoveContainer" containerID="a4a632917a903434c332778cab25a487262760e4d26ba53f90b95d908f526f7e"
	Nov 15 10:33:06 old-k8s-version-448285 kubelet[778]: E1115 10:33:06.011128     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-npv27_kubernetes-dashboard(99aa18b9-d4b8-43be-9700-bdbd82aaf4fd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-npv27" podUID="99aa18b9-d4b8-43be-9700-bdbd82aaf4fd"
	Nov 15 10:33:17 old-k8s-version-448285 kubelet[778]: I1115 10:33:17.006321     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 15 10:33:17 old-k8s-version-448285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:33:17 old-k8s-version-448285 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:33:17 old-k8s-version-448285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [84fde01cef2dc7f7c8f204de97b73b47e569756843d2f46859eb14b56140d6fd] <==
	2025/11/15 10:32:45 Starting overwatch
	2025/11/15 10:32:45 Using namespace: kubernetes-dashboard
	2025/11/15 10:32:45 Using in-cluster config to connect to apiserver
	2025/11/15 10:32:45 Using secret token for csrf signing
	2025/11/15 10:32:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:32:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:32:45 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 10:32:45 Generating JWE encryption key
	2025/11/15 10:32:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:32:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:32:45 Initializing JWE encryption key from synchronized object
	2025/11/15 10:32:45 Creating in-cluster Sidecar client
	2025/11/15 10:32:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:32:45 Serving insecurely on HTTP port: 9090
	2025/11/15 10:33:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [39b0b5e22ac1ad3090120e07a964d83e2d624c345bef9256af9b7627790dcfd2] <==
	I1115 10:32:25.261485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:32:55.263283       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eface29866b4e016367d33114ac59baf51eba41f487b4403f1437106ef9f4c88] <==
	I1115 10:32:55.722466       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:32:55.738406       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:32:55.738463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:33:13.137953       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:33:13.138178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448285_c35e07f9-ceb2-483a-b66c-ba2e8829245d!
	I1115 10:33:13.139487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85aebc5f-7825-4f4f-9b86-fd9c4a02df82", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-448285_c35e07f9-ceb2-483a-b66c-ba2e8829245d became leader
	I1115 10:33:13.239104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-448285_c35e07f9-ceb2-483a-b66c-ba2e8829245d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-448285 -n old-k8s-version-448285
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-448285 -n old-k8s-version-448285: exit status 2 (394.142206ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-448285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.21s)
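The `status --format={{.APIServer}}` check above exits with status 2 even though it prints Running, so only one field is visible per call. A hedged follow-up for triage, assuming the profile still exists and using minikube's standard JSON status output instead of a Go template (this command is not part of the recorded run):

	out/minikube-linux-arm64 status -p old-k8s-version-448285 --output json
	# prints the Host, Kubelet, APIServer and Kubeconfig states in a single document;
	# a non-zero exit code still indicates that at least one component is not in its expected state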

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.531084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:34:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
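The stderr above shows what the addon-enable path actually checks: it shells out to `sudo runc list -f json` on the node to decide whether the cluster is paused, and that check fails because `/run/runc` does not exist on this crio node. A hedged way to reproduce the check by hand against the same profile (a diagnostic sketch, not part of the recorded run; the second command simply lists `/run` to see which runtime state directories are actually present):

	out/minikube-linux-arm64 ssh -p no-preload-907610 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p no-preload-907610 -- sudo ls /run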
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-907610 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-907610 describe deploy/metrics-server -n kube-system: exit status 1 (97.995757ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-907610 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
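For completeness, the assertion above is satisfied only when the metrics-server Deployment actually carries the rewritten image; a hedged manual check against the same context (plain kubectl, not part of the recorded run) would be:

	kubectl --context no-preload-907610 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# per the --images/--registries flags above, this should print fake.domain/registry.k8s.io/echoserver:1.4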
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-907610
helpers_test.go:243: (dbg) docker inspect no-preload-907610:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe",
	        "Created": "2025-11-15T10:33:27.637520569Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 701564,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:33:27.770941354Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/hosts",
	        "LogPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe-json.log",
	        "Name": "/no-preload-907610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-907610:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-907610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe",
	                "LowerDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-907610",
	                "Source": "/var/lib/docker/volumes/no-preload-907610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-907610",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-907610",
	                "name.minikube.sigs.k8s.io": "no-preload-907610",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8e8264921c59a312ee859bdc08cc851e93c52db2561b3b432810a9a7410a2c4",
	            "SandboxKey": "/var/run/docker/netns/a8e8264921c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-907610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:97:29:77:66:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0446e1129b53a450726fcb48f165692a586d6e4eabe7e4a70c1e31a89bd483dd",
	                    "EndpointID": "6a7443863db85d0993961f9dc08122f4467e1ca2bf8eadc696018d1bc084ee4f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-907610",
	                        "10054bd2292b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-907610 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-907610 logs -n 25: (1.269511654s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-864099 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ ssh     │ -p cilium-864099 sudo crio config                                                                                                                                                                                                             │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │                     │
	│ delete  │ -p cilium-864099                                                                                                                                                                                                                              │ cilium-864099             │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p force-systemd-env-683299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p kubernetes-upgrade-480353                                                                                                                                                                                                                  │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-683299                                                                                                                                                                                                                   │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-115480 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-448285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596        │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:33:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:33:58.563119  704900 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:33:58.563653  704900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:33:58.563687  704900 out.go:374] Setting ErrFile to fd 2...
	I1115 10:33:58.563708  704900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:33:58.563992  704900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:33:58.564470  704900 out.go:368] Setting JSON to false
	I1115 10:33:58.565410  704900 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18990,"bootTime":1763183849,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:33:58.565503  704900 start.go:143] virtualization:  
	I1115 10:33:58.569248  704900 out.go:179] * [embed-certs-531596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:33:58.573821  704900 notify.go:221] Checking for updates...
	I1115 10:33:58.574668  704900 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:33:58.578048  704900 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:33:58.581278  704900 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:33:58.584455  704900 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:33:58.587640  704900 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:33:58.592312  704900 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:33:58.596133  704900 config.go:182] Loaded profile config "no-preload-907610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:33:58.596302  704900 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:33:58.659220  704900 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:33:58.659343  704900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:33:58.744206  704900 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-15 10:33:58.734659468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:33:58.744313  704900 docker.go:319] overlay module found
	I1115 10:33:58.747669  704900 out.go:179] * Using the docker driver based on user configuration
	I1115 10:33:58.750652  704900 start.go:309] selected driver: docker
	I1115 10:33:58.750687  704900 start.go:930] validating driver "docker" against <nil>
	I1115 10:33:58.750705  704900 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:33:58.751385  704900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:33:58.837692  704900 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-11-15 10:33:58.82820641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:33:58.837851  704900 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:33:58.838075  704900 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:33:58.841157  704900 out.go:179] * Using Docker driver with root privileges
	I1115 10:33:58.844087  704900 cni.go:84] Creating CNI manager for ""
	I1115 10:33:58.844151  704900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:33:58.844160  704900 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:33:58.844258  704900 start.go:353] cluster config:
	{Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:33:58.847383  704900 out.go:179] * Starting "embed-certs-531596" primary control-plane node in "embed-certs-531596" cluster
	I1115 10:33:58.850195  704900 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:33:58.853110  704900 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:33:58.856109  704900 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:33:58.856153  704900 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:33:58.856164  704900 cache.go:65] Caching tarball of preloaded images
	I1115 10:33:58.856278  704900 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:33:58.856288  704900 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:33:58.856399  704900 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json ...
	I1115 10:33:58.856418  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json: {Name:mk44738008bfc819bec8159ed414b3a4bad9e836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:33:58.856561  704900 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:33:58.874157  704900 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:33:58.874175  704900 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:33:58.874190  704900 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:33:58.874214  704900 start.go:360] acquireMachinesLock for embed-certs-531596: {Name:mk92715fcdfed9f5936819aaa5d8bdc4948b9228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:33:58.874311  704900 start.go:364] duration metric: took 81.738µs to acquireMachinesLock for "embed-certs-531596"
	I1115 10:33:58.874335  704900 start.go:93] Provisioning new machine with config: &{Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:33:58.874400  704900 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:33:56.430393  701171 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:33:56.704597  701171 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:33:57.298423  701171 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:33:59.530326  701171 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:34:00.233182  701171 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:34:00.233957  701171 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-907610] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:34:01.079987  701171 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:34:01.080561  701171 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-907610] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:33:58.877799  704900 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:33:58.878054  704900 start.go:159] libmachine.API.Create for "embed-certs-531596" (driver="docker")
	I1115 10:33:58.878083  704900 client.go:173] LocalClient.Create starting
	I1115 10:33:58.878139  704900 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:33:58.878177  704900 main.go:143] libmachine: Decoding PEM data...
	I1115 10:33:58.878190  704900 main.go:143] libmachine: Parsing certificate...
	I1115 10:33:58.878240  704900 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:33:58.878257  704900 main.go:143] libmachine: Decoding PEM data...
	I1115 10:33:58.878266  704900 main.go:143] libmachine: Parsing certificate...
	I1115 10:33:58.878596  704900 cli_runner.go:164] Run: docker network inspect embed-certs-531596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:33:58.911787  704900 cli_runner.go:211] docker network inspect embed-certs-531596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:33:58.911866  704900 network_create.go:284] running [docker network inspect embed-certs-531596] to gather additional debugging logs...
	I1115 10:33:58.911890  704900 cli_runner.go:164] Run: docker network inspect embed-certs-531596
	W1115 10:33:58.931168  704900 cli_runner.go:211] docker network inspect embed-certs-531596 returned with exit code 1
	I1115 10:33:58.931196  704900 network_create.go:287] error running [docker network inspect embed-certs-531596]: docker network inspect embed-certs-531596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-531596 not found
	I1115 10:33:58.931225  704900 network_create.go:289] output of [docker network inspect embed-certs-531596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-531596 not found
	
	** /stderr **
	I1115 10:33:58.931324  704900 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:33:58.947664  704900 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:33:58.948045  704900 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:33:58.948414  704900 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:33:58.948839  704900 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195f810}
	I1115 10:33:58.948857  704900 network_create.go:124] attempt to create docker network embed-certs-531596 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:33:58.948914  704900 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-531596 embed-certs-531596
	I1115 10:33:59.019057  704900 network_create.go:108] docker network embed-certs-531596 192.168.76.0/24 created
	I1115 10:33:59.019086  704900 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-531596" container
	I1115 10:33:59.019173  704900 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:33:59.043244  704900 cli_runner.go:164] Run: docker volume create embed-certs-531596 --label name.minikube.sigs.k8s.io=embed-certs-531596 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:33:59.060794  704900 oci.go:103] Successfully created a docker volume embed-certs-531596
	I1115 10:33:59.060885  704900 cli_runner.go:164] Run: docker run --rm --name embed-certs-531596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-531596 --entrypoint /usr/bin/test -v embed-certs-531596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:33:59.672602  704900 oci.go:107] Successfully prepared a docker volume embed-certs-531596
	I1115 10:33:59.672687  704900 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:33:59.672697  704900 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:33:59.672759  704900 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-531596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:34:01.692612  701171 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:34:02.224996  701171 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:34:02.559427  701171 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:34:02.559961  701171 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:34:02.765717  701171 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:34:02.852264  701171 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:34:03.400790  701171 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:34:03.619055  701171 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:34:03.924114  701171 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:34:03.924847  701171 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:34:03.927483  701171 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:34:03.970085  701171 out.go:252]   - Booting up control plane ...
	I1115 10:34:03.970199  701171 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:34:03.970281  701171 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:34:03.970352  701171 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:34:03.974265  701171 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:34:03.974544  701171 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:34:03.982084  701171 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:34:03.982359  701171 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:34:03.982586  701171 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:34:04.117127  701171 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:34:04.117257  701171 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:34:04.520109  704900 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-531596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.847315009s)
	I1115 10:34:04.520137  704900 kic.go:203] duration metric: took 4.847436352s to extract preloaded images to volume ...
	W1115 10:34:04.520277  704900 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:34:04.520390  704900 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:34:04.594795  704900 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-531596 --name embed-certs-531596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-531596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-531596 --network embed-certs-531596 --ip 192.168.76.2 --volume embed-certs-531596:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:34:04.945526  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Running}}
	I1115 10:34:04.966855  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:34:04.988490  704900 cli_runner.go:164] Run: docker exec embed-certs-531596 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:34:05.063872  704900 oci.go:144] the created container "embed-certs-531596" has a running status.
	I1115 10:34:05.063900  704900 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa...
	I1115 10:34:05.907200  704900 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:34:05.936016  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:34:05.956986  704900 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:34:05.957004  704900 kic_runner.go:114] Args: [docker exec --privileged embed-certs-531596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:34:06.025739  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:34:06.063185  704900 machine.go:94] provisionDockerMachine start ...
	I1115 10:34:06.063275  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:06.087048  704900 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:06.087381  704900 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 10:34:06.087391  704900 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:34:06.089740  704900 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:34:07.117976  701171 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 3.001704885s
	I1115 10:34:07.121948  701171 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:34:07.122270  701171 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1115 10:34:07.122386  701171 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:34:07.122488  701171 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:34:09.289251  704900 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-531596
	
	I1115 10:34:09.289279  704900 ubuntu.go:182] provisioning hostname "embed-certs-531596"
	I1115 10:34:09.289355  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:09.319774  704900 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:09.320087  704900 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 10:34:09.320111  704900 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-531596 && echo "embed-certs-531596" | sudo tee /etc/hostname
	I1115 10:34:09.507365  704900 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-531596
	
	I1115 10:34:09.507482  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:09.531579  704900 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:09.531886  704900 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 10:34:09.531902  704900 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-531596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-531596/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-531596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:34:09.718044  704900 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:34:09.718067  704900 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:34:09.718091  704900 ubuntu.go:190] setting up certificates
	I1115 10:34:09.718101  704900 provision.go:84] configureAuth start
	I1115 10:34:09.718157  704900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:34:09.748247  704900 provision.go:143] copyHostCerts
	I1115 10:34:09.748315  704900 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:34:09.748324  704900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:34:09.748398  704900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:34:09.748485  704900 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:34:09.748489  704900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:34:09.748513  704900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:34:09.748567  704900 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:34:09.748572  704900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:34:09.748594  704900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:34:09.748638  704900 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.embed-certs-531596 san=[127.0.0.1 192.168.76.2 embed-certs-531596 localhost minikube]
	I1115 10:34:10.113014  704900 provision.go:177] copyRemoteCerts
	I1115 10:34:10.113140  704900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:34:10.113224  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:10.150300  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:10.263615  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:34:10.296340  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:34:10.323211  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:34:10.350191  704900 provision.go:87] duration metric: took 632.076742ms to configureAuth
	I1115 10:34:10.350266  704900 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:34:10.350504  704900 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:10.350670  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:10.382234  704900 main.go:143] libmachine: Using SSH client type: native
	I1115 10:34:10.382590  704900 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33794 <nil> <nil>}
	I1115 10:34:10.382604  704900 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:34:10.764063  704900 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:34:10.764092  704900 machine.go:97] duration metric: took 4.700885174s to provisionDockerMachine
	I1115 10:34:10.764103  704900 client.go:176] duration metric: took 11.88601396s to LocalClient.Create
	I1115 10:34:10.764114  704900 start.go:167] duration metric: took 11.886060202s to libmachine.API.Create "embed-certs-531596"
	I1115 10:34:10.764121  704900 start.go:293] postStartSetup for "embed-certs-531596" (driver="docker")
	I1115 10:34:10.764130  704900 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:34:10.764211  704900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:34:10.764253  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:10.794269  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:10.916117  704900 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:34:10.919887  704900 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:34:10.919919  704900 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:34:10.919932  704900 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:34:10.919990  704900 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:34:10.920080  704900 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:34:10.920201  704900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:34:10.929736  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:34:10.955406  704900 start.go:296] duration metric: took 191.270512ms for postStartSetup
	I1115 10:34:10.955773  704900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:34:10.985817  704900 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json ...
	I1115 10:34:10.986094  704900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:34:10.986135  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:11.022394  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:11.150048  704900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:34:11.155095  704900 start.go:128] duration metric: took 12.28067747s to createHost
	I1115 10:34:11.155121  704900 start.go:83] releasing machines lock for "embed-certs-531596", held for 12.280801635s
	I1115 10:34:11.155202  704900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:34:11.173965  704900 ssh_runner.go:195] Run: cat /version.json
	I1115 10:34:11.174032  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:11.174301  704900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:34:11.174352  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:11.218987  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:11.221173  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:11.456051  704900 ssh_runner.go:195] Run: systemctl --version
	I1115 10:34:11.466456  704900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:34:11.546918  704900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:34:11.554717  704900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:34:11.554841  704900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:34:11.590486  704900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:34:11.590516  704900 start.go:496] detecting cgroup driver to use...
	I1115 10:34:11.590549  704900 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:34:11.590603  704900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:34:11.616147  704900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:34:11.636411  704900 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:34:11.636559  704900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:34:11.668455  704900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:34:11.697460  704900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:34:11.922100  704900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:34:12.143014  704900 docker.go:234] disabling docker service ...
	I1115 10:34:12.143085  704900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:34:12.192014  704900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:34:12.222129  704900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:34:12.465831  704900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:34:12.710970  704900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:34:12.747352  704900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:34:12.779071  704900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:34:12.779188  704900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.790290  704900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:34:12.790408  704900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.807901  704900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.818910  704900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.830673  704900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:34:12.843078  704900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.857611  704900 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.875554  704900 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:34:12.887278  704900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:34:12.897828  704900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:34:12.906489  704900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:13.101562  704900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:34:13.285826  704900 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:34:13.285949  704900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:34:13.292449  704900 start.go:564] Will wait 60s for crictl version
	I1115 10:34:13.292561  704900 ssh_runner.go:195] Run: which crictl
	I1115 10:34:13.301241  704900 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:34:13.349852  704900 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:34:13.350033  704900 ssh_runner.go:195] Run: crio --version
	I1115 10:34:13.390524  704900 ssh_runner.go:195] Run: crio --version
	I1115 10:34:13.432776  704900 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:34:13.435790  704900 cli_runner.go:164] Run: docker network inspect embed-certs-531596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:34:13.452722  704900 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:34:13.458276  704900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:34:13.468088  704900 kubeadm.go:884] updating cluster {Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:34:13.468224  704900 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:34:13.468279  704900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:34:13.512211  704900 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:34:13.512235  704900 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:34:13.512291  704900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:34:13.547525  704900 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:34:13.547548  704900 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:34:13.547556  704900 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:34:13.547643  704900 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-531596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:34:13.547764  704900 ssh_runner.go:195] Run: crio config
	I1115 10:34:11.319875  701171 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.197458691s
	I1115 10:34:13.236696  701171 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.114673814s
	I1115 10:34:15.124675  701171 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002602255s
	I1115 10:34:15.150991  701171 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:34:15.169945  701171 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:34:15.186503  701171 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:34:15.186712  701171 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-907610 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:34:15.211163  701171 kubeadm.go:319] [bootstrap-token] Using token: 5ucvd5.bzv7gjjwtjkjehax
	I1115 10:34:15.214138  701171 out.go:252]   - Configuring RBAC rules ...
	I1115 10:34:15.214268  701171 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:34:15.229936  701171 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:34:15.241951  701171 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:34:15.242096  701171 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:34:15.249945  701171 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:34:15.251869  701171 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:34:15.533330  701171 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:34:16.081563  701171 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:34:16.534159  701171 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:34:16.535284  701171 kubeadm.go:319] 
	I1115 10:34:16.535363  701171 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:34:16.535370  701171 kubeadm.go:319] 
	I1115 10:34:16.535451  701171 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:34:16.535456  701171 kubeadm.go:319] 
	I1115 10:34:16.535482  701171 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:34:16.535682  701171 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:34:16.535745  701171 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:34:16.535751  701171 kubeadm.go:319] 
	I1115 10:34:16.535808  701171 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:34:16.535813  701171 kubeadm.go:319] 
	I1115 10:34:16.535878  701171 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:34:16.535883  701171 kubeadm.go:319] 
	I1115 10:34:16.535937  701171 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:34:16.536024  701171 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:34:16.536096  701171 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:34:16.536101  701171 kubeadm.go:319] 
	I1115 10:34:16.536197  701171 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:34:16.536278  701171 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:34:16.536282  701171 kubeadm.go:319] 
	I1115 10:34:16.536370  701171 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5ucvd5.bzv7gjjwtjkjehax \
	I1115 10:34:16.536478  701171 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:34:16.536500  701171 kubeadm.go:319] 	--control-plane 
	I1115 10:34:16.536504  701171 kubeadm.go:319] 
	I1115 10:34:16.536593  701171 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:34:16.536597  701171 kubeadm.go:319] 
	I1115 10:34:16.536799  701171 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5ucvd5.bzv7gjjwtjkjehax \
	I1115 10:34:16.536916  701171 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:34:16.545939  701171 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:34:16.546174  701171 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:34:16.546284  701171 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:34:16.546299  701171 cni.go:84] Creating CNI manager for ""
	I1115 10:34:16.546306  701171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:16.549737  701171 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:34:13.615979  704900 cni.go:84] Creating CNI manager for ""
	I1115 10:34:13.616004  704900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:13.616021  704900 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:34:13.616047  704900 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-531596 NodeName:embed-certs-531596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:34:13.616189  704900 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-531596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:34:13.616284  704900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:34:13.630079  704900 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:34:13.630158  704900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:34:13.642673  704900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:34:13.659820  704900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:34:13.676104  704900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
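The rendered kubeadm config above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file, which minikube copies to /var/tmp/minikube/kubeadm.yaml.new before running kubeadm init against it. A hedged sketch for sanity-checking such a file by hand, assuming a kubeadm new enough to ship the 'config validate' subcommand (roughly v1.26+):
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new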
	I1115 10:34:13.690185  704900 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:34:13.694147  704900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
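That one-liner rewrites /etc/hosts idempotently: it strips any existing control-plane.minikube.internal entry, appends the current mapping, and installs the rebuilt file. The same command, unrolled and annotated purely for readability (no behavior beyond what the log already shows):
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
	  echo "192.168.76.2	control-plane.minikube.internal"       # re-add the current mapping
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                                  # install the rebuilt hosts file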
	I1115 10:34:13.704481  704900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:13.855271  704900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:34:13.879579  704900 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596 for IP: 192.168.76.2
	I1115 10:34:13.879601  704900 certs.go:195] generating shared ca certs ...
	I1115 10:34:13.879617  704900 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:13.879757  704900 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:34:13.879808  704900 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:34:13.879827  704900 certs.go:257] generating profile certs ...
	I1115 10:34:13.879882  704900 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.key
	I1115 10:34:13.879899  704900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.crt with IP's: []
	I1115 10:34:14.902427  704900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.crt ...
	I1115 10:34:14.902456  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.crt: {Name:mk6130adc8c572049107cee26c55a2c0806f046a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:14.902680  704900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.key ...
	I1115 10:34:14.902698  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.key: {Name:mk7d04a3c6557569f953fe5921ff1fc0cc7db204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:14.902807  704900 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key.8b8c468c
	I1115 10:34:14.902827  704900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt.8b8c468c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:34:15.410566  704900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt.8b8c468c ...
	I1115 10:34:15.410597  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt.8b8c468c: {Name:mk81b088f7b9282ab002ab589be2c81857676e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:15.410801  704900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key.8b8c468c ...
	I1115 10:34:15.410815  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key.8b8c468c: {Name:mk23e784e4fb3f018970f9eb0afaaffe14da8cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:15.410903  704900 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt.8b8c468c -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt
	I1115 10:34:15.410990  704900 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key.8b8c468c -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key
	I1115 10:34:15.411052  704900 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key
	I1115 10:34:15.411070  704900 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.crt with IP's: []
	I1115 10:34:15.686733  704900 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.crt ...
	I1115 10:34:15.686769  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.crt: {Name:mkdae84e3b3111ac18fa2f1fa86304c8cf081d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:15.686952  704900 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key ...
	I1115 10:34:15.686967  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key: {Name:mk7bb3d781667b3945e5df75650ddd7e8c62471d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
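The profile certs generated above include an apiserver serving cert whose SANs cover the service VIP, loopback, an internal address, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A quick way to confirm that on the generated file, sketched with the path from this log:
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'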
	I1115 10:34:15.687171  704900 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:34:15.687215  704900 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:34:15.687232  704900 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:34:15.687257  704900 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:34:15.687289  704900 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:34:15.687315  704900 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:34:15.687361  704900 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:34:15.687942  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:34:15.708295  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:34:15.724699  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:34:15.740898  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:34:15.756929  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:34:15.773447  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:34:15.790616  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:34:15.807053  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:34:15.827274  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:34:15.860339  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:34:15.884232  704900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:34:15.905405  704900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:34:15.925245  704900 ssh_runner.go:195] Run: openssl version
	I1115 10:34:15.932345  704900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:34:15.941482  704900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:34:15.945962  704900 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:34:15.946029  704900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:34:15.993431  704900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:34:16.002441  704900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:34:16.012455  704900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:34:16.017109  704900 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:34:16.017181  704900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:34:16.060277  704900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:34:16.069088  704900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:34:16.078847  704900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:34:16.084239  704900 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:34:16.084348  704900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:34:16.127228  704900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
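The block above links each CA under /usr/share/ca-certificates into /etc/ssl/certs under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0 and 51391683.0 names), which is how the system trust store locates them. The same idea, recomputing the hash instead of hard-coding it (sketch only):
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0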
	I1115 10:34:16.135777  704900 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:34:16.140285  704900 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:34:16.140384  704900 kubeadm.go:401] StartCluster: {Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:34:16.140496  704900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:34:16.140592  704900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:34:16.177071  704900 cri.go:89] found id: ""
	I1115 10:34:16.177191  704900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:34:16.190200  704900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:34:16.202800  704900 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:34:16.202905  704900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:34:16.215937  704900 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:34:16.216004  704900 kubeadm.go:158] found existing configuration files:
	
	I1115 10:34:16.216069  704900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:34:16.229959  704900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:34:16.230064  704900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:34:16.242300  704900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:34:16.254260  704900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:34:16.254378  704900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:34:16.266628  704900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:34:16.278861  704900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:34:16.278978  704900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:34:16.293138  704900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:34:16.309519  704900 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:34:16.309658  704900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:34:16.317974  704900 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:34:16.426897  704900 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:34:16.430360  704900 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:34:16.466469  704900 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:34:16.466616  704900 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:34:16.466710  704900 kubeadm.go:319] OS: Linux
	I1115 10:34:16.466790  704900 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:34:16.466859  704900 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:34:16.466944  704900 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:34:16.467011  704900 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:34:16.467124  704900 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:34:16.467191  704900 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:34:16.467244  704900 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:34:16.467299  704900 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:34:16.467352  704900 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:34:16.576818  704900 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:34:16.576939  704900 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:34:16.577048  704900 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:34:16.594054  704900 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:34:16.599369  704900 out.go:252]   - Generating certificates and keys ...
	I1115 10:34:16.599465  704900 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:34:16.599534  704900 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:34:17.246498  704900 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:34:17.347135  704900 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:34:17.736652  704900 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:34:16.552718  701171 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:34:16.570253  701171 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:34:16.570270  701171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:34:16.594893  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:34:17.035308  701171 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:34:17.035434  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:17.035515  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-907610 minikube.k8s.io/updated_at=2025_11_15T10_34_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=no-preload-907610 minikube.k8s.io/primary=true
	I1115 10:34:17.297449  701171 ops.go:34] apiserver oom_adj: -16
	I1115 10:34:17.297642  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:17.797742  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:18.297733  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:18.797732  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:19.297972  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:19.797750  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:20.297949  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:20.798660  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:21.298425  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:21.798208  701171 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:22.005182  701171 kubeadm.go:1114] duration metric: took 4.969790592s to wait for elevateKubeSystemPrivileges
	I1115 10:34:22.005210  701171 kubeadm.go:403] duration metric: took 26.470075326s to StartCluster
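elevateKubeSystemPrivileges covers two things visible above: creating the minikube-rbac ClusterRoleBinding and polling 'kubectl get sa default' roughly every 500ms until the default ServiceAccount exists. The polling half, as a standalone sketch using the paths from this log:
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5    # retry until the default ServiceAccount has been created
	done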
	I1115 10:34:22.005230  701171 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:22.005305  701171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:34:22.006062  701171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:22.006307  701171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:34:22.006431  701171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:34:22.006706  701171 config.go:182] Loaded profile config "no-preload-907610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:22.006744  701171 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:34:22.006814  701171 addons.go:70] Setting storage-provisioner=true in profile "no-preload-907610"
	I1115 10:34:22.006829  701171 addons.go:239] Setting addon storage-provisioner=true in "no-preload-907610"
	I1115 10:34:22.006854  701171 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:34:22.007369  701171 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:34:22.007888  701171 addons.go:70] Setting default-storageclass=true in profile "no-preload-907610"
	I1115 10:34:22.007932  701171 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-907610"
	I1115 10:34:22.008233  701171 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:34:22.010006  701171 out.go:179] * Verifying Kubernetes components...
	I1115 10:34:22.013386  701171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:22.043615  701171 addons.go:239] Setting addon default-storageclass=true in "no-preload-907610"
	I1115 10:34:22.043655  701171 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:34:22.044084  701171 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:34:22.056120  701171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:34:18.782791  704900 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:34:19.031392  704900 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:34:19.031823  704900 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-531596 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:34:19.806970  704900 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:34:19.807465  704900 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-531596 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:34:21.107954  704900 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:34:21.606842  704900 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:34:22.956565  704900 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:34:22.956967  704900 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:34:23.278913  704900 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:34:22.060728  701171 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:34:22.060757  701171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:34:22.060827  701171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:34:22.082526  701171 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:34:22.082547  701171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:34:22.082610  701171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:34:22.109783  701171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:34:22.127488  701171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33789 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:34:22.637483  701171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:34:22.688324  701171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:34:22.690377  701171 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:34:22.783258  701171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:34:24.560528  701171 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.872122546s)
	I1115 10:34:24.560619  701171 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.870183186s)
	I1115 10:34:24.560768  701171 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
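The host-record injection above edits the CoreDNS Corefile in place: it pulls the coredns ConfigMap, inserts a hosts{} block mapping host.minikube.internal to the gateway IP ahead of the forward plugin, and replaces the ConfigMap. A shortened, annotated form of the same pipeline (the extra edit that enables the log plugin is omitted, and the full kubectl/kubeconfig paths are abbreviated):
	kubectl -n kube-system get configmap coredns -o yaml \
	    | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
	    | kubectl -n kube-system replace -f -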
	I1115 10:34:24.560644  701171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.777327673s)
	I1115 10:34:24.563324  701171 node_ready.go:35] waiting up to 6m0s for node "no-preload-907610" to be "Ready" ...
	I1115 10:34:24.564970  701171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.9274198s)
	I1115 10:34:24.682729  701171 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:34:23.797842  704900 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:34:24.647106  704900 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:34:25.435719  704900 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:34:25.743364  704900 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:34:25.744119  704900 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:34:25.746884  704900 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:34:24.685708  701171 addons.go:515] duration metric: took 2.678947057s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:34:25.070850  701171 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-907610" context rescaled to 1 replicas
	I1115 10:34:25.750801  704900 out.go:252]   - Booting up control plane ...
	I1115 10:34:25.750928  704900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:34:25.751011  704900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:34:25.751095  704900 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:34:25.768870  704900 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:34:25.768988  704900 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:34:25.777499  704900 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:34:25.777624  704900 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:34:25.777673  704900 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:34:25.953966  704900 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:34:25.954095  704900 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:34:27.457116  704900 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500748407s
	I1115 10:34:27.461451  704900 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:34:27.461551  704900 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 10:34:27.462878  704900 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:34:27.462990  704900 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 10:34:26.566134  701171 node_ready.go:57] node "no-preload-907610" has "Ready":"False" status (will retry)
	W1115 10:34:28.567524  701171 node_ready.go:57] node "no-preload-907610" has "Ready":"False" status (will retry)
	W1115 10:34:31.066669  701171 node_ready.go:57] node "no-preload-907610" has "Ready":"False" status (will retry)
	I1115 10:34:31.239009  704900 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.774559159s
	I1115 10:34:33.363317  704900 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.898618886s
	I1115 10:34:35.466036  704900 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002053934s
	I1115 10:34:35.489490  704900 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:34:35.503838  704900 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:34:35.522256  704900 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:34:35.522736  704900 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-531596 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:34:35.539404  704900 kubeadm.go:319] [bootstrap-token] Using token: xs3vg8.jq4uvxwyv4aszjul
	W1115 10:34:33.067045  701171 node_ready.go:57] node "no-preload-907610" has "Ready":"False" status (will retry)
	W1115 10:34:35.067559  701171 node_ready.go:57] node "no-preload-907610" has "Ready":"False" status (will retry)
	I1115 10:34:35.542224  704900 out.go:252]   - Configuring RBAC rules ...
	I1115 10:34:35.542366  704900 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:34:35.547015  704900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:34:35.558400  704900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:34:35.563199  704900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:34:35.569777  704900 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:34:35.574103  704900 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:34:35.872771  704900 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:34:36.330072  704900 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:34:36.876282  704900 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:34:36.877624  704900 kubeadm.go:319] 
	I1115 10:34:36.877723  704900 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:34:36.877738  704900 kubeadm.go:319] 
	I1115 10:34:36.877826  704900 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:34:36.877832  704900 kubeadm.go:319] 
	I1115 10:34:36.877858  704900 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:34:36.877926  704900 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:34:36.877979  704900 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:34:36.877983  704900 kubeadm.go:319] 
	I1115 10:34:36.878039  704900 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:34:36.878044  704900 kubeadm.go:319] 
	I1115 10:34:36.878094  704900 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:34:36.878100  704900 kubeadm.go:319] 
	I1115 10:34:36.878154  704900 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:34:36.878233  704900 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:34:36.878304  704900 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:34:36.878309  704900 kubeadm.go:319] 
	I1115 10:34:36.878396  704900 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:34:36.878476  704900 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:34:36.878481  704900 kubeadm.go:319] 
	I1115 10:34:36.878568  704900 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xs3vg8.jq4uvxwyv4aszjul \
	I1115 10:34:36.878676  704900 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:34:36.878698  704900 kubeadm.go:319] 	--control-plane 
	I1115 10:34:36.878702  704900 kubeadm.go:319] 
	I1115 10:34:36.878790  704900 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:34:36.878795  704900 kubeadm.go:319] 
	I1115 10:34:36.878880  704900 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xs3vg8.jq4uvxwyv4aszjul \
	I1115 10:34:36.878986  704900 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:34:36.883130  704900 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:34:36.883372  704900 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:34:36.883485  704900 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:34:36.883510  704900 cni.go:84] Creating CNI manager for ""
	I1115 10:34:36.883523  704900 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:34:36.888471  704900 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:34:36.891405  704900 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:34:36.895807  704900 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:34:36.895828  704900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:34:36.915960  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:34:37.431368  704900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:34:37.431512  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:37.431592  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-531596 minikube.k8s.io/updated_at=2025_11_15T10_34_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=embed-certs-531596 minikube.k8s.io/primary=true
	I1115 10:34:37.664751  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:37.664613  704900 ops.go:34] apiserver oom_adj: -16
	I1115 10:34:38.165286  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:34:37.567247  701171 node_ready.go:57] node "no-preload-907610" has "Ready":"False" status (will retry)
	I1115 10:34:38.574833  701171 node_ready.go:49] node "no-preload-907610" is "Ready"
	I1115 10:34:38.574857  701171 node_ready.go:38] duration metric: took 14.011465509s for node "no-preload-907610" to be "Ready" ...
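The retry loop above is minikube's own poll of the node object; an equivalent one-shot check from outside the test harness would be the standard kubectl wait (node name and timeout taken from the log):
	kubectl wait --for=condition=Ready node/no-preload-907610 --timeout=6m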
	I1115 10:34:38.574871  701171 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:34:38.574930  701171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:34:38.589302  701171 api_server.go:72] duration metric: took 16.582963194s to wait for apiserver process to appear ...
	I1115 10:34:38.589326  701171 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:34:38.589346  701171 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:34:38.599828  701171 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 10:34:38.600958  701171 api_server.go:141] control plane version: v1.34.1
	I1115 10:34:38.600981  701171 api_server.go:131] duration metric: took 11.646774ms to wait for apiserver health ...
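The healthz wait hits the apiserver endpoint directly and treats an HTTP 200 with body "ok" as healthy. The same probe by hand, sketched with kubectl so the cluster credentials are reused:
	kubectl get --raw /healthz
	# expected output: ok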
	I1115 10:34:38.600990  701171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:34:38.605385  701171 system_pods.go:59] 8 kube-system pods found
	I1115 10:34:38.605415  701171 system_pods.go:61] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:38.605422  701171 system_pods.go:61] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running
	I1115 10:34:38.605427  701171 system_pods.go:61] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:34:38.605432  701171 system_pods.go:61] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running
	I1115 10:34:38.605437  701171 system_pods.go:61] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running
	I1115 10:34:38.605453  701171 system_pods.go:61] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:34:38.605458  701171 system_pods.go:61] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running
	I1115 10:34:38.605464  701171 system_pods.go:61] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:38.605471  701171 system_pods.go:74] duration metric: took 4.475084ms to wait for pod list to return data ...
	I1115 10:34:38.605482  701171 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:34:38.609118  701171 default_sa.go:45] found service account: "default"
	I1115 10:34:38.609143  701171 default_sa.go:55] duration metric: took 3.653437ms for default service account to be created ...
	I1115 10:34:38.609153  701171 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:34:38.617973  701171 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:38.618008  701171 system_pods.go:89] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:38.618015  701171 system_pods.go:89] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running
	I1115 10:34:38.618021  701171 system_pods.go:89] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:34:38.618025  701171 system_pods.go:89] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running
	I1115 10:34:38.618030  701171 system_pods.go:89] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running
	I1115 10:34:38.618035  701171 system_pods.go:89] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:34:38.618040  701171 system_pods.go:89] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running
	I1115 10:34:38.618046  701171 system_pods.go:89] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:38.618076  701171 retry.go:31] will retry after 220.026417ms: missing components: kube-dns
	I1115 10:34:38.841237  701171 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:38.841273  701171 system_pods.go:89] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:38.841280  701171 system_pods.go:89] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running
	I1115 10:34:38.841286  701171 system_pods.go:89] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:34:38.841292  701171 system_pods.go:89] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running
	I1115 10:34:38.841297  701171 system_pods.go:89] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running
	I1115 10:34:38.841300  701171 system_pods.go:89] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:34:38.841304  701171 system_pods.go:89] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running
	I1115 10:34:38.841310  701171 system_pods.go:89] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:38.841326  701171 retry.go:31] will retry after 258.813393ms: missing components: kube-dns
	I1115 10:34:39.107050  701171 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:39.107087  701171 system_pods.go:89] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:39.107095  701171 system_pods.go:89] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running
	I1115 10:34:39.107102  701171 system_pods.go:89] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:34:39.107107  701171 system_pods.go:89] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running
	I1115 10:34:39.107111  701171 system_pods.go:89] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running
	I1115 10:34:39.107116  701171 system_pods.go:89] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:34:39.107120  701171 system_pods.go:89] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running
	I1115 10:34:39.107127  701171 system_pods.go:89] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:39.107147  701171 retry.go:31] will retry after 352.808767ms: missing components: kube-dns
	I1115 10:34:39.464210  701171 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:39.464245  701171 system_pods.go:89] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:34:39.464252  701171 system_pods.go:89] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running
	I1115 10:34:39.464259  701171 system_pods.go:89] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:34:39.464264  701171 system_pods.go:89] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running
	I1115 10:34:39.464269  701171 system_pods.go:89] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running
	I1115 10:34:39.464274  701171 system_pods.go:89] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:34:39.464278  701171 system_pods.go:89] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running
	I1115 10:34:39.464286  701171 system_pods.go:89] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:34:39.464309  701171 retry.go:31] will retry after 392.167038ms: missing components: kube-dns
	I1115 10:34:39.860025  701171 system_pods.go:86] 8 kube-system pods found
	I1115 10:34:39.860052  701171 system_pods.go:89] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Running
	I1115 10:34:39.860058  701171 system_pods.go:89] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running
	I1115 10:34:39.860063  701171 system_pods.go:89] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:34:39.860067  701171 system_pods.go:89] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running
	I1115 10:34:39.860072  701171 system_pods.go:89] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running
	I1115 10:34:39.860076  701171 system_pods.go:89] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:34:39.860080  701171 system_pods.go:89] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running
	I1115 10:34:39.860084  701171 system_pods.go:89] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Running
	I1115 10:34:39.860091  701171 system_pods.go:126] duration metric: took 1.250933172s to wait for k8s-apps to be running ...
	I1115 10:34:39.860099  701171 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:34:39.860155  701171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:34:39.875012  701171 system_svc.go:56] duration metric: took 14.902937ms WaitForService to wait for kubelet
	I1115 10:34:39.875039  701171 kubeadm.go:587] duration metric: took 17.868706262s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:34:39.875057  701171 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:34:39.877738  701171 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:34:39.877773  701171 node_conditions.go:123] node cpu capacity is 2
	I1115 10:34:39.877787  701171 node_conditions.go:105] duration metric: took 2.723789ms to run NodePressure ...
	I1115 10:34:39.877799  701171 start.go:242] waiting for startup goroutines ...
	I1115 10:34:39.877806  701171 start.go:247] waiting for cluster config update ...
	I1115 10:34:39.877817  701171 start.go:256] writing updated cluster config ...
	I1115 10:34:39.878125  701171 ssh_runner.go:195] Run: rm -f paused
	I1115 10:34:39.881942  701171 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:39.885178  701171 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ql8g6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:39.889456  701171 pod_ready.go:94] pod "coredns-66bc5c9577-ql8g6" is "Ready"
	I1115 10:34:39.889483  701171 pod_ready.go:86] duration metric: took 4.285649ms for pod "coredns-66bc5c9577-ql8g6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:39.891658  701171 pod_ready.go:83] waiting for pod "etcd-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:39.896067  701171 pod_ready.go:94] pod "etcd-no-preload-907610" is "Ready"
	I1115 10:34:39.896094  701171 pod_ready.go:86] duration metric: took 4.408903ms for pod "etcd-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:39.898241  701171 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:39.902271  701171 pod_ready.go:94] pod "kube-apiserver-no-preload-907610" is "Ready"
	I1115 10:34:39.902297  701171 pod_ready.go:86] duration metric: took 4.031715ms for pod "kube-apiserver-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:39.906556  701171 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:40.290251  701171 pod_ready.go:94] pod "kube-controller-manager-no-preload-907610" is "Ready"
	I1115 10:34:40.290332  701171 pod_ready.go:86] duration metric: took 383.7488ms for pod "kube-controller-manager-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:40.486382  701171 pod_ready.go:83] waiting for pod "kube-proxy-rh8h4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:40.885516  701171 pod_ready.go:94] pod "kube-proxy-rh8h4" is "Ready"
	I1115 10:34:40.885624  701171 pod_ready.go:86] duration metric: took 399.184088ms for pod "kube-proxy-rh8h4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:41.085982  701171 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:38.665549  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:39.164908  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:39.665763  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:40.165191  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:40.665334  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:41.164898  704900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:34:41.277678  704900 kubeadm.go:1114] duration metric: took 3.846208358s to wait for elevateKubeSystemPrivileges
	I1115 10:34:41.277707  704900 kubeadm.go:403] duration metric: took 25.137327734s to StartCluster
	I1115 10:34:41.277730  704900 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:41.277805  704900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:34:41.279173  704900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:34:41.279415  704900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:34:41.279425  704900 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:34:41.279484  704900 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-531596"
	I1115 10:34:41.279407  704900 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:34:41.279668  704900 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:34:41.279699  704900 addons.go:70] Setting default-storageclass=true in profile "embed-certs-531596"
	I1115 10:34:41.279709  704900 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-531596"
	I1115 10:34:41.279497  704900 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-531596"
	I1115 10:34:41.279987  704900 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:34:41.280163  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:34:41.280481  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:34:41.284103  704900 out.go:179] * Verifying Kubernetes components...
	I1115 10:34:41.287063  704900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:34:41.330526  704900 addons.go:239] Setting addon default-storageclass=true in "embed-certs-531596"
	I1115 10:34:41.330583  704900 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:34:41.331063  704900 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:34:41.345901  704900 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:34:41.486698  701171 pod_ready.go:94] pod "kube-scheduler-no-preload-907610" is "Ready"
	I1115 10:34:41.486727  701171 pod_ready.go:86] duration metric: took 400.669814ms for pod "kube-scheduler-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:34:41.486742  701171 pod_ready.go:40] duration metric: took 1.60473214s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:34:41.599643  701171 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:34:41.603689  701171 out.go:179] * Done! kubectl is now configured to use "no-preload-907610" cluster and "default" namespace by default
	I1115 10:34:41.351490  704900 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:34:41.351518  704900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:34:41.351591  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:41.388385  704900 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:34:41.388407  704900 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:34:41.388471  704900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:34:41.413789  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:41.432300  704900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33794 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:34:41.992605  704900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:34:41.992789  704900 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:34:42.010067  704900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:34:42.021203  704900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:34:42.621456  704900 node_ready.go:35] waiting up to 6m0s for node "embed-certs-531596" to be "Ready" ...
	I1115 10:34:42.621765  704900 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 10:34:42.893329  704900 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:34:42.896261  704900 addons.go:515] duration metric: took 1.616823666s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:34:43.126453  704900 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-531596" context rescaled to 1 replicas
	W1115 10:34:44.625014  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	W1115 10:34:47.124414  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
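	
	(The "node_ready" retries above simply poll the node's Ready condition until the kubelet reports it. A minimal manual equivalent, as a sketch assuming the kubeconfig written by this run is the active one, would be:
	
	  kubectl wait --for=condition=Ready node/embed-certs-531596 --timeout=6m0s
	
	The 6m timeout mirrors the "waiting up to 6m0s" value logged at the start of the wait.)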
	
	
	==> CRI-O <==
	Nov 15 10:34:38 no-preload-907610 crio[842]: time="2025-11-15T10:34:38.976653511Z" level=info msg="Created container b4a8dda4539dc6a5b5c2d6a0abce341951f858cc19dcc8cb836f28b9d249f9f7: kube-system/coredns-66bc5c9577-ql8g6/coredns" id=708ea736-fa6e-4218-8bec-bb65dad34850 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:38 no-preload-907610 crio[842]: time="2025-11-15T10:34:38.978089056Z" level=info msg="Starting container: b4a8dda4539dc6a5b5c2d6a0abce341951f858cc19dcc8cb836f28b9d249f9f7" id=0fd238d7-b3ae-498b-8930-83195334c45d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:38 no-preload-907610 crio[842]: time="2025-11-15T10:34:38.984468973Z" level=info msg="Started container" PID=2498 containerID=b4a8dda4539dc6a5b5c2d6a0abce341951f858cc19dcc8cb836f28b9d249f9f7 description=kube-system/coredns-66bc5c9577-ql8g6/coredns id=0fd238d7-b3ae-498b-8930-83195334c45d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfd3c817da9e5acd86a8acf827718ec509222e216acb44d33e1ce2f5a53224e4
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.192425161Z" level=info msg="Running pod sandbox: default/busybox/POD" id=41907eab-3c3a-4a2c-aa37-4a0b8ac165c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.192524678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.209499213Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12 UID:9f8722f6-c3d5-4376-a8a0-64c12d93558c NetNS:/var/run/netns/682ab0b7-7678-439d-87c0-5069b8617982 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b798}] Aliases:map[]}"
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.209550026Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.229803043Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12 UID:9f8722f6-c3d5-4376-a8a0-64c12d93558c NetNS:/var/run/netns/682ab0b7-7678-439d-87c0-5069b8617982 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b798}] Aliases:map[]}"
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.230224217Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.237463293Z" level=info msg="Ran pod sandbox 6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12 with infra container: default/busybox/POD" id=41907eab-3c3a-4a2c-aa37-4a0b8ac165c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.238987862Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c58e961f-730f-4485-b2a8-1cfd618d557b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.239167476Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c58e961f-730f-4485-b2a8-1cfd618d557b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.239216672Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c58e961f-730f-4485-b2a8-1cfd618d557b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.249895268Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f82a8ea-d863-48cf-bd20-366d60b85823 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:42 no-preload-907610 crio[842]: time="2025-11-15T10:34:42.251836047Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.344142813Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3f82a8ea-d863-48cf-bd20-366d60b85823 name=/runtime.v1.ImageService/PullImage
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.344757541Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=001684df-61f7-4ff7-9ac2-d060398cd813 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.346468213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=470e296d-7c24-4177-9658-35a6b0ab3872 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.352746381Z" level=info msg="Creating container: default/busybox/busybox" id=2ef4a002-5956-422e-a77e-0dc57c64ff2e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.352879539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.357634746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.358145829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.379285123Z" level=info msg="Created container 920aa9e656fb45f5328d602fa2b8dd54ff93ac542bf3418019ff4b06b673b458: default/busybox/busybox" id=2ef4a002-5956-422e-a77e-0dc57c64ff2e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.381054977Z" level=info msg="Starting container: 920aa9e656fb45f5328d602fa2b8dd54ff93ac542bf3418019ff4b06b673b458" id=e7173049-41b1-47d4-ac9c-3ae230502b2b name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:34:44 no-preload-907610 crio[842]: time="2025-11-15T10:34:44.384037187Z" level=info msg="Started container" PID=2549 containerID=920aa9e656fb45f5328d602fa2b8dd54ff93ac542bf3418019ff4b06b673b458 description=default/busybox/busybox id=e7173049-41b1-47d4-ac9c-3ae230502b2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12
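	
	(The "container status" listing below reflects the runtime state described in the CRI-O log above. As a sketch of how to inspect it directly, assuming crictl is available inside the minikube node as usual:
	
	  minikube -p no-preload-907610 ssh -- sudo crictl ps -a
	)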
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	920aa9e656fb4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   6e31675bae776       busybox                                     default
	b4a8dda4539dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   cfd3c817da9e5       coredns-66bc5c9577-ql8g6                    kube-system
	f12202c3ec5c7       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   0bf95724bfc2c       storage-provisioner                         kube-system
	64d698fbcd35e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   490d20f22c73b       kindnet-kgnjv                               kube-system
	a943e4a61dc86       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   67b3330c8ea38       kube-proxy-rh8h4                            kube-system
	7e3eb756f7cd9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   53366e48b5c2b       kube-scheduler-no-preload-907610            kube-system
	9d95941d0f719       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   2dff8bfe931c8       kube-controller-manager-no-preload-907610   kube-system
	caa1c6181b5d7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   d2b57e846ae21       kube-apiserver-no-preload-907610            kube-system
	570546ba75c77       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   76023cf97fe23       etcd-no-preload-907610                      kube-system
	
	
	==> coredns [b4a8dda4539dc6a5b5c2d6a0abce341951f858cc19dcc8cb836f28b9d249f9f7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42970 - 660 "HINFO IN 22966311054619584.5764622489630821562. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.024430148s
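	
	(minikube injects a hosts block for host.minikube.internal into the coredns ConfigMap; the embed-certs start log above shows that step as a "kubectl ... replace" pipeline. To check whether this cluster's Corefile carries the same entry, as a sketch assuming the "no-preload-907610" context from this run:
	
	  kubectl --context no-preload-907610 -n kube-system get configmap coredns -o yaml
	)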
	
	
	==> describe nodes <==
	Name:               no-preload-907610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-907610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=no-preload-907610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-907610
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:34:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:34:47 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:34:47 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:34:47 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:34:47 +0000   Sat, 15 Nov 2025 10:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-907610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d6341372-a597-4e99-ab89-f00924067763
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-ql8g6                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-907610                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-kgnjv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-907610             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-907610    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-rh8h4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-907610             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-907610 event: Registered Node no-preload-907610 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-907610 status is now: NodeReady
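	
	(The node summary above is standard "kubectl describe node" output and can be regenerated against the same cluster while it is still running, e.g.:
	
	  kubectl --context no-preload-907610 describe node no-preload-907610
	)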
	
	
	==> dmesg <==
	[Nov15 10:10] overlayfs: idmapped layers are currently not supported
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [570546ba75c77737a6407acd420071a881596b01713d01161f29124685dbfe54] <==
	{"level":"warn","ts":"2025-11-15T10:34:10.536408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.537187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.562837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.599685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.654260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.695584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.718766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.754102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.814148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.885153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.890539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.947168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:10.997168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.020941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.062931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.090247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.154051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.175779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.227162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.294018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.313715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.357997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.402585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.424285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:11.543998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52972","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:34:51 up  5:17,  0 user,  load average: 3.87, 3.49, 2.89
	Linux no-preload-907610 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [64d698fbcd35ee09639db4dcd7085aa948decbc3ffcc137d02b2a1488e9b1e0e] <==
	I1115 10:34:27.812681       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:34:27.813017       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:34:27.813164       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:34:27.813182       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:34:27.813196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:34:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:34:28.026243       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:34:28.026351       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:34:28.026390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:34:28.027412       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:34:28.310034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:34:28.310165       1 metrics.go:72] Registering metrics
	I1115 10:34:28.310250       1 controller.go:711] "Syncing nftables rules"
	I1115 10:34:38.030703       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:34:38.030811       1 main.go:301] handling current node
	I1115 10:34:48.024127       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:34:48.024166       1 main.go:301] handling current node
	
	
	==> kube-apiserver [caa1c6181b5d730843de049f7194d6899ad74a04e94a7c54954566d570ffa65f] <==
	I1115 10:34:12.947920       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:34:12.947951       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:34:12.977438       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:12.977508       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:34:13.009096       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:34:13.083162       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:13.083237       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:34:13.597478       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:34:13.605117       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:34:13.605146       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:34:14.675844       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:34:14.758139       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:34:14.929964       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:34:14.943932       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 10:34:14.945369       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:34:14.951082       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:34:15.769112       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:34:16.037714       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:34:16.078485       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:34:16.112814       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:34:21.017849       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:21.026857       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:21.750967       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:34:21.832933       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1115 10:34:50.012599       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:46772: use of closed network connection
	
	
	==> kube-controller-manager [9d95941d0f719a6267178ec8572848cfa6c1e0c65db66174c8a791a291c78124] <==
	I1115 10:34:20.819173       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:34:20.823309       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:34:20.823423       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:34:20.823490       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-907610"
	I1115 10:34:20.823531       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:34:20.847360       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:34:20.847442       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:34:20.847516       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:34:20.847523       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:34:20.847530       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:34:20.847748       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:34:20.847798       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:34:20.855007       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:34:20.856264       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:34:20.857117       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:34:20.866592       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:34:20.866726       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:34:20.866753       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:34:20.866770       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:34:20.866774       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:34:20.866779       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:34:20.877680       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:34:20.877716       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:34:20.958017       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-907610" podCIDRs=["10.244.0.0/24"]
	I1115 10:34:40.826682       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a943e4a61dc86267d183db295f26b2ebe063a85ca7552ae6c3352002ada6db66] <==
	I1115 10:34:22.859988       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:34:22.952136       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:34:23.053722       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:34:23.053760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:34:23.053842       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:34:23.115949       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:34:23.116000       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:34:23.145733       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:34:23.146088       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:34:23.146105       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:23.148966       1 config.go:200] "Starting service config controller"
	I1115 10:34:23.148996       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:34:23.149024       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:34:23.149028       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:34:23.149039       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:34:23.149042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:34:23.153894       1 config.go:309] "Starting node config controller"
	I1115 10:34:23.153907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:34:23.153914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:34:23.249393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:34:23.249434       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:34:23.249474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7e3eb756f7cd9b54ccaa1432f9e05456ef7600d653ed33b978c5b35f5b760528] <==
	E1115 10:34:13.238207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:34:13.238282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:34:13.238317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:34:13.238352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:34:13.238382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:34:13.238468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:34:13.238499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:34:13.238613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:34:13.238779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:34:13.238815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:34:13.238849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:34:13.238881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:34:13.238915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:34:13.238950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:34:14.097802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:34:14.126486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:34:14.126576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:34:14.136514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:34:14.197047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:34:14.248245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:34:14.270452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:34:14.351944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:34:14.376684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:34:14.398913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 10:34:16.201264       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:34:17 no-preload-907610 kubelet[2003]: I1115 10:34:17.584137    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-907610" podStartSLOduration=1.584111767 podStartE2EDuration="1.584111767s" podCreationTimestamp="2025-11-15 10:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:17.557920216 +0000 UTC m=+1.641942162" watchObservedRunningTime="2025-11-15 10:34:17.584111767 +0000 UTC m=+1.668133697"
	Nov 15 10:34:20 no-preload-907610 kubelet[2003]: I1115 10:34:20.940637    2003 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:34:20 no-preload-907610 kubelet[2003]: I1115 10:34:20.946734    2003 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:34:21 no-preload-907610 kubelet[2003]: I1115 10:34:21.908996    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcbcr\" (UniqueName: \"kubernetes.io/projected/421d25ba-102f-4638-b4ea-1a99bb7ceab5-kube-api-access-gcbcr\") pod \"kindnet-kgnjv\" (UID: \"421d25ba-102f-4638-b4ea-1a99bb7ceab5\") " pod="kube-system/kindnet-kgnjv"
	Nov 15 10:34:21 no-preload-907610 kubelet[2003]: I1115 10:34:21.909051    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/421d25ba-102f-4638-b4ea-1a99bb7ceab5-lib-modules\") pod \"kindnet-kgnjv\" (UID: \"421d25ba-102f-4638-b4ea-1a99bb7ceab5\") " pod="kube-system/kindnet-kgnjv"
	Nov 15 10:34:21 no-preload-907610 kubelet[2003]: I1115 10:34:21.909079    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/421d25ba-102f-4638-b4ea-1a99bb7ceab5-cni-cfg\") pod \"kindnet-kgnjv\" (UID: \"421d25ba-102f-4638-b4ea-1a99bb7ceab5\") " pod="kube-system/kindnet-kgnjv"
	Nov 15 10:34:21 no-preload-907610 kubelet[2003]: I1115 10:34:21.909098    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/421d25ba-102f-4638-b4ea-1a99bb7ceab5-xtables-lock\") pod \"kindnet-kgnjv\" (UID: \"421d25ba-102f-4638-b4ea-1a99bb7ceab5\") " pod="kube-system/kindnet-kgnjv"
	Nov 15 10:34:22 no-preload-907610 kubelet[2003]: I1115 10:34:22.013902    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/353b68d4-24ef-47ff-9420-6dfd96c66c24-kube-proxy\") pod \"kube-proxy-rh8h4\" (UID: \"353b68d4-24ef-47ff-9420-6dfd96c66c24\") " pod="kube-system/kube-proxy-rh8h4"
	Nov 15 10:34:22 no-preload-907610 kubelet[2003]: I1115 10:34:22.013954    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltv89\" (UniqueName: \"kubernetes.io/projected/353b68d4-24ef-47ff-9420-6dfd96c66c24-kube-api-access-ltv89\") pod \"kube-proxy-rh8h4\" (UID: \"353b68d4-24ef-47ff-9420-6dfd96c66c24\") " pod="kube-system/kube-proxy-rh8h4"
	Nov 15 10:34:22 no-preload-907610 kubelet[2003]: I1115 10:34:22.013981    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/353b68d4-24ef-47ff-9420-6dfd96c66c24-xtables-lock\") pod \"kube-proxy-rh8h4\" (UID: \"353b68d4-24ef-47ff-9420-6dfd96c66c24\") " pod="kube-system/kube-proxy-rh8h4"
	Nov 15 10:34:22 no-preload-907610 kubelet[2003]: I1115 10:34:22.014011    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/353b68d4-24ef-47ff-9420-6dfd96c66c24-lib-modules\") pod \"kube-proxy-rh8h4\" (UID: \"353b68d4-24ef-47ff-9420-6dfd96c66c24\") " pod="kube-system/kube-proxy-rh8h4"
	Nov 15 10:34:22 no-preload-907610 kubelet[2003]: I1115 10:34:22.107877    2003 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:34:22 no-preload-907610 kubelet[2003]: W1115 10:34:22.564223    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/crio-67b3330c8ea38af9518a25f771dfe7165f19b04948a4a2f2c1cbebd4cbbb47b4 WatchSource:0}: Error finding container 67b3330c8ea38af9518a25f771dfe7165f19b04948a4a2f2c1cbebd4cbbb47b4: Status 404 returned error can't find the container with id 67b3330c8ea38af9518a25f771dfe7165f19b04948a4a2f2c1cbebd4cbbb47b4
	Nov 15 10:34:23 no-preload-907610 kubelet[2003]: I1115 10:34:23.560900    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rh8h4" podStartSLOduration=2.560882357 podStartE2EDuration="2.560882357s" podCreationTimestamp="2025-11-15 10:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:23.560496866 +0000 UTC m=+7.644518796" watchObservedRunningTime="2025-11-15 10:34:23.560882357 +0000 UTC m=+7.644904278"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: I1115 10:34:38.236248    2003 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: I1115 10:34:38.277950    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kgnjv" podStartSLOduration=12.134964724 podStartE2EDuration="17.277925059s" podCreationTimestamp="2025-11-15 10:34:21 +0000 UTC" firstStartedPulling="2025-11-15 10:34:22.538168959 +0000 UTC m=+6.622190881" lastFinishedPulling="2025-11-15 10:34:27.681129294 +0000 UTC m=+11.765151216" observedRunningTime="2025-11-15 10:34:28.575876815 +0000 UTC m=+12.659898745" watchObservedRunningTime="2025-11-15 10:34:38.277925059 +0000 UTC m=+22.361946989"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: I1115 10:34:38.455394    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8x2r\" (UniqueName: \"kubernetes.io/projected/217253e2-283b-49d2-8a84-30111b378edd-kube-api-access-p8x2r\") pod \"storage-provisioner\" (UID: \"217253e2-283b-49d2-8a84-30111b378edd\") " pod="kube-system/storage-provisioner"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: I1115 10:34:38.455448    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/217253e2-283b-49d2-8a84-30111b378edd-tmp\") pod \"storage-provisioner\" (UID: \"217253e2-283b-49d2-8a84-30111b378edd\") " pod="kube-system/storage-provisioner"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: I1115 10:34:38.455473    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce1fc969-663d-4f7e-87db-2b3bf3b6ee52-config-volume\") pod \"coredns-66bc5c9577-ql8g6\" (UID: \"ce1fc969-663d-4f7e-87db-2b3bf3b6ee52\") " pod="kube-system/coredns-66bc5c9577-ql8g6"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: I1115 10:34:38.455492    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv9vg\" (UniqueName: \"kubernetes.io/projected/ce1fc969-663d-4f7e-87db-2b3bf3b6ee52-kube-api-access-qv9vg\") pod \"coredns-66bc5c9577-ql8g6\" (UID: \"ce1fc969-663d-4f7e-87db-2b3bf3b6ee52\") " pod="kube-system/coredns-66bc5c9577-ql8g6"
	Nov 15 10:34:38 no-preload-907610 kubelet[2003]: W1115 10:34:38.929346    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/crio-cfd3c817da9e5acd86a8acf827718ec509222e216acb44d33e1ce2f5a53224e4 WatchSource:0}: Error finding container cfd3c817da9e5acd86a8acf827718ec509222e216acb44d33e1ce2f5a53224e4: Status 404 returned error can't find the container with id cfd3c817da9e5acd86a8acf827718ec509222e216acb44d33e1ce2f5a53224e4
	Nov 15 10:34:39 no-preload-907610 kubelet[2003]: I1115 10:34:39.612580    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.612559084 podStartE2EDuration="15.612559084s" podCreationTimestamp="2025-11-15 10:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:39.61229405 +0000 UTC m=+23.696315971" watchObservedRunningTime="2025-11-15 10:34:39.612559084 +0000 UTC m=+23.696581014"
	Nov 15 10:34:39 no-preload-907610 kubelet[2003]: I1115 10:34:39.626890    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ql8g6" podStartSLOduration=17.626872459 podStartE2EDuration="17.626872459s" podCreationTimestamp="2025-11-15 10:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:39.626749894 +0000 UTC m=+23.710771840" watchObservedRunningTime="2025-11-15 10:34:39.626872459 +0000 UTC m=+23.710894381"
	Nov 15 10:34:41 no-preload-907610 kubelet[2003]: I1115 10:34:41.993424    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpczp\" (UniqueName: \"kubernetes.io/projected/9f8722f6-c3d5-4376-a8a0-64c12d93558c-kube-api-access-cpczp\") pod \"busybox\" (UID: \"9f8722f6-c3d5-4376-a8a0-64c12d93558c\") " pod="default/busybox"
	Nov 15 10:34:42 no-preload-907610 kubelet[2003]: W1115 10:34:42.235975    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/crio-6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12 WatchSource:0}: Error finding container 6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12: Status 404 returned error can't find the container with id 6e31675bae77673ac6393f6e9411b6f6dc7a220999ea7e9b8c2babe55e042b12
	
	
	==> storage-provisioner [f12202c3ec5c71a65b8a227d3a55645fc0aeff7b5f5a70c427229770e8d8a3d0] <==
	I1115 10:34:38.969153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:34:39.014118       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:34:39.014200       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:34:39.017776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:39.027786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:34:39.028051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:34:39.028932       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-907610_bcedb1ad-6d7b-426f-b16d-8865adbb8500!
	I1115 10:34:39.028660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd672018-af9b-4d26-a795-58bf6d65cf94", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-907610_bcedb1ad-6d7b-426f-b16d-8865adbb8500 became leader
	W1115 10:34:39.041200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:39.045920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:34:39.129496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-907610_bcedb1ad-6d7b-426f-b16d-8865adbb8500!
	W1115 10:34:41.050503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:41.055635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:43.058362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:43.065395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:45.083506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:45.098372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:47.101375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:47.105637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:49.109557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:49.116352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:51.120522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:34:51.129868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-907610 -n no-preload-907610
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-907610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.58s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.145005ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
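Note: the exit status 11 above comes out of minikube's paused check, not out of the addon itself. Before enabling an addon, minikube lists runc containers on the node, and on this crio node "sudo runc list -f json" fails because /run/runc does not exist. The sketch below is a rough reproduction of that kind of check, run from the host against the node's Docker container; the container name, the docker-exec wrapping, and the error handling are illustrative assumptions, and minikube's real implementation differs.

package main

import (
	"fmt"
	"os/exec"
)

// listRuncContainers reproduces the failing "check paused" step: it asks runc
// inside the node container for its container list in JSON form. A non-zero
// exit means the paused state cannot be determined, which is what surfaces as
// MK_ADDON_ENABLE_PAUSED above.
func listRuncContainers(node string) ([]byte, error) {
	// Assumption: the node is reachable as a Docker container named after the
	// profile; "sudo runc list -f json" is the exact command from the error.
	cmd := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On this crio node the call fails with
		// "open /run/runc: no such file or directory".
		return nil, fmt.Errorf("runc list failed: %v: %s", err, out)
	}
	return out, nil
}

func main() {
	out, err := listRuncContainers("embed-certs-531596")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}

Re-running that runc command by hand inside the node is the quickest way to confirm whether /run/runc is present when reproducing this outside CI.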
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-531596 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-531596 describe deploy/metrics-server -n kube-system: exit status 1 (88.99689ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-531596 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
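Note: this last assertion is a knock-on failure. Because the enable command exited before applying anything, deploy/metrics-server was never created, so the describe output is empty and cannot contain the expected image string. The sketch below shows roughly what the check amounts to, written as a standalone Go program; the kubectl invocation and the expected substring are taken from the output above, while the standalone wiring is illustrative only and is not the test's own helper code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// deploymentUsesImage mirrors the assertion at start_stop_delete_test.go:219:
// describe the metrics-server deployment and look for the custom-registry
// image passed via --registries=MetricsServer=fake.domain.
func deploymentUsesImage(context, want string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		// In this run the describe fails with NotFound, so there is nothing
		// to match against and the assertion can only fail.
		return false, fmt.Errorf("describe failed: %v: %s", err, out)
	}
	return strings.Contains(string(out), want), nil
}

func main() {
	ok, err := deploymentUsesImage("embed-certs-531596",
		"fake.domain/registry.k8s.io/echoserver:1.4")
	fmt.Println(ok, err)
}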
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-531596
helpers_test.go:243: (dbg) docker inspect embed-certs-531596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c",
	        "Created": "2025-11-15T10:34:04.609645199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 705361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:34:04.711899378Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/hosts",
	        "LogPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c-json.log",
	        "Name": "/embed-certs-531596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-531596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-531596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c",
	                "LowerDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-531596",
	                "Source": "/var/lib/docker/volumes/embed-certs-531596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-531596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-531596",
	                "name.minikube.sigs.k8s.io": "embed-certs-531596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "812b59df44838727ab482f252c916669c42d30ac8b6f59f3f800e2c13917ec09",
	            "SandboxKey": "/var/run/docker/netns/812b59df4483",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-531596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:35:24:b3:10:57",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3f5512ac8a850c62b7a0512f3192588adf3870d53b8a37838ac0a556f7411b44",
	                    "EndpointID": "76fefca2f4be7af06fd9a26c31e04f57ddc63733c9120d109a1dfdc9bee68a4b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-531596",
	                        "6743ffb16c2e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
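Note: the most useful part of the inspect output for debugging is the Ports block: the node's API server port 8443/tcp is published on 127.0.0.1:33797, which is the endpoint the following status and kubectl calls go through. Pulling that mapping out programmatically looks roughly like the sketch below; the Go template is standard docker-inspect formatting, and the container name is the one from this report.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerHostPort reads the host port a container publishes for 8443/tcp,
// i.e. the "Ports" entry shown in the inspect output above.
func apiServerHostPort(container string) (string, error) {
	// Equivalent to:
	//   docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' <container>
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := apiServerHostPort("embed-certs-531596")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container above this prints 33797.
	fmt.Println("API server published on 127.0.0.1:" + port)
}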
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-531596 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-531596 logs -n 25: (1.278517147s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p kubernetes-upgrade-480353                                                                                                                                                                                                                  │ kubernetes-upgrade-480353 │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:29 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:29 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p force-systemd-env-683299                                                                                                                                                                                                                   │ force-systemd-env-683299  │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ cert-options-115480 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480       │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-448285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026    │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596        │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610         │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596        │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:04.869095  708863 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:04.869209  708863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:04.869220  708863 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:04.869232  708863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:04.869590  708863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:35:04.870373  708863 out.go:368] Setting JSON to false
	I1115 10:35:04.871358  708863 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19056,"bootTime":1763183849,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:35:04.871428  708863 start.go:143] virtualization:  
	I1115 10:35:04.874442  708863 out.go:179] * [no-preload-907610] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:35:04.878438  708863 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:35:04.878516  708863 notify.go:221] Checking for updates...
	I1115 10:35:04.884361  708863 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:04.887257  708863 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:04.890310  708863 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:35:04.893154  708863 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:35:04.896070  708863 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:35:04.899350  708863 config.go:182] Loaded profile config "no-preload-907610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:04.899885  708863 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:04.927547  708863 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:35:04.928287  708863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:04.997482  708863 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:35:04.978872939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:35:04.997648  708863 docker.go:319] overlay module found
	I1115 10:35:05.007177  708863 out.go:179] * Using the docker driver based on existing profile
	I1115 10:35:05.010154  708863 start.go:309] selected driver: docker
	I1115 10:35:05.010198  708863 start.go:930] validating driver "docker" against &{Name:no-preload-907610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-907610 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:05.010316  708863 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:05.011073  708863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:05.077107  708863 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:35:05.066770383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:35:05.077461  708863 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:05.077496  708863 cni.go:84] Creating CNI manager for ""
	I1115 10:35:05.077554  708863 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:05.077632  708863 start.go:353] cluster config:
	{Name:no-preload-907610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-907610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:05.081213  708863 out.go:179] * Starting "no-preload-907610" primary control-plane node in "no-preload-907610" cluster
	I1115 10:35:05.084243  708863 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:05.088004  708863 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:05.090831  708863 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:05.091017  708863 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:05.091140  708863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/config.json ...
	I1115 10:35:05.091464  708863 cache.go:107] acquiring lock: {Name:mk487e043ec48b0dcf646150be88ab18dcd8913d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091556  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 10:35:05.091570  708863 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.673µs
	I1115 10:35:05.091579  708863 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 10:35:05.091660  708863 cache.go:107] acquiring lock: {Name:mk80ef78d6bb74e7785daf0bf71d4e3ee6a5b294 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091709  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 10:35:05.091718  708863 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 63.925µs
	I1115 10:35:05.091729  708863 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 10:35:05.091740  708863 cache.go:107] acquiring lock: {Name:mk7a82492de0432fab25b698513c1579ac845ad7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091775  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 10:35:05.091784  708863 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 45.422µs
	I1115 10:35:05.091790  708863 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 10:35:05.091799  708863 cache.go:107] acquiring lock: {Name:mk1d32e7ea70d0ae9d2d2092a1f559c739d84089 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091831  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 10:35:05.091841  708863 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.388µs
	I1115 10:35:05.091847  708863 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 10:35:05.091856  708863 cache.go:107] acquiring lock: {Name:mk15cefebaeaa079e418228433f7a82ae70f7148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091882  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 10:35:05.091887  708863 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 32.41µs
	I1115 10:35:05.091893  708863 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 10:35:05.091903  708863 cache.go:107] acquiring lock: {Name:mk0661623c091880342d48dd210ba531019999b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091935  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 10:35:05.091941  708863 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 38.579µs
	I1115 10:35:05.091946  708863 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 10:35:05.091955  708863 cache.go:107] acquiring lock: {Name:mka4a8e06e559463d396938b336791a181f1c355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.091980  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1115 10:35:05.091992  708863 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 37.907µs
	I1115 10:35:05.091998  708863 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 10:35:05.092008  708863 cache.go:107] acquiring lock: {Name:mkf85ff5488e9b919ae99b70dca04875c839f1d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.092040  708863 cache.go:115] /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 10:35:05.092048  708863 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 41.106µs
	I1115 10:35:05.092063  708863 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 10:35:05.092074  708863 cache.go:87] Successfully saved all images to host disk.
	I1115 10:35:05.110408  708863 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:05.110431  708863 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:05.110449  708863 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:05.110473  708863 start.go:360] acquireMachinesLock for no-preload-907610: {Name:mk46d590fe707bfb8ee190d711cd42dcf2739e99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:05.110523  708863 start.go:364] duration metric: took 35.42µs to acquireMachinesLock for "no-preload-907610"
	I1115 10:35:05.110544  708863 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:35:05.110555  708863 fix.go:54] fixHost starting: 
	I1115 10:35:05.110814  708863 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:35:05.130952  708863 fix.go:112] recreateIfNeeded on no-preload-907610: state=Stopped err=<nil>
	W1115 10:35:05.130984  708863 fix.go:138] unexpected machine state, will restart: <nil>
	W1115 10:35:05.125417  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	W1115 10:35:07.624728  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	I1115 10:35:05.134335  708863 out.go:252] * Restarting existing docker container for "no-preload-907610" ...
	I1115 10:35:05.134428  708863 cli_runner.go:164] Run: docker start no-preload-907610
	I1115 10:35:05.415330  708863 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:35:05.438199  708863 kic.go:430] container "no-preload-907610" state is running.
	I1115 10:35:05.438719  708863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-907610
	I1115 10:35:05.463268  708863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/config.json ...
	I1115 10:35:05.463499  708863 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:05.463563  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:05.485490  708863 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:05.485844  708863 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1115 10:35:05.485861  708863 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:05.486750  708863 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:35:08.641443  708863 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-907610
	
	I1115 10:35:08.641470  708863 ubuntu.go:182] provisioning hostname "no-preload-907610"
	I1115 10:35:08.641546  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:08.659829  708863 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:08.660384  708863 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1115 10:35:08.660404  708863 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-907610 && echo "no-preload-907610" | sudo tee /etc/hostname
	I1115 10:35:08.823706  708863 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-907610
	
	I1115 10:35:08.823804  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:08.843647  708863 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:08.843971  708863 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1115 10:35:08.843991  708863 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-907610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-907610/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-907610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:08.993879  708863 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:08.993907  708863 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:35:08.993935  708863 ubuntu.go:190] setting up certificates
	I1115 10:35:08.993947  708863 provision.go:84] configureAuth start
	I1115 10:35:08.994008  708863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-907610
	I1115 10:35:09.013904  708863 provision.go:143] copyHostCerts
	I1115 10:35:09.013978  708863 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:35:09.013993  708863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:35:09.014073  708863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:35:09.014173  708863 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:35:09.014182  708863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:35:09.014210  708863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:35:09.014279  708863 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:35:09.014289  708863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:35:09.014317  708863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:35:09.014372  708863 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.no-preload-907610 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-907610]
	I1115 10:35:09.220111  708863 provision.go:177] copyRemoteCerts
	I1115 10:35:09.220213  708863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:09.220295  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:09.239227  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:09.347546  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:09.370136  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:35:09.390475  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:35:09.408691  708863 provision.go:87] duration metric: took 414.729245ms to configureAuth
	I1115 10:35:09.408729  708863 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:09.408970  708863 config.go:182] Loaded profile config "no-preload-907610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:09.409092  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:09.426599  708863 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:09.426999  708863 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1115 10:35:09.427021  708863 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:35:09.760022  708863 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:09.760048  708863 machine.go:97] duration metric: took 4.296540621s to provisionDockerMachine
	I1115 10:35:09.760060  708863 start.go:293] postStartSetup for "no-preload-907610" (driver="docker")
	I1115 10:35:09.760070  708863 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:09.760145  708863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:09.760212  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:09.779815  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:09.885543  708863 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:09.888759  708863 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:09.888827  708863 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:09.888844  708863 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:35:09.888898  708863 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:35:09.888983  708863 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:35:09.889092  708863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:09.896333  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:35:09.914158  708863 start.go:296] duration metric: took 154.077009ms for postStartSetup
	I1115 10:35:09.914298  708863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:09.914373  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:09.931212  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:10.035042  708863 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:10.040232  708863 fix.go:56] duration metric: took 4.929670519s for fixHost
	I1115 10:35:10.040257  708863 start.go:83] releasing machines lock for "no-preload-907610", held for 4.929724376s
	I1115 10:35:10.040340  708863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-907610
	I1115 10:35:10.057190  708863 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:10.057245  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:10.057671  708863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:10.057731  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:10.082347  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:10.095414  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:10.307538  708863 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:10.314327  708863 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:10.349680  708863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:10.354138  708863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:10.354212  708863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:10.362755  708863 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:10.362780  708863 start.go:496] detecting cgroup driver to use...
	I1115 10:35:10.362841  708863 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:10.362908  708863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:10.383261  708863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:10.395794  708863 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:10.395900  708863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:10.412266  708863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:10.425772  708863 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:10.546158  708863 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:10.675108  708863 docker.go:234] disabling docker service ...
	I1115 10:35:10.675191  708863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:10.691332  708863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:10.704066  708863 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:10.821399  708863 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:10.960858  708863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:10.974320  708863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:10.989655  708863 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:10.989746  708863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:10.999483  708863 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:10.999555  708863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:11.017161  708863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:11.027290  708863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:11.035988  708863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:11.044219  708863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:11.053529  708863 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:11.062594  708863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:11.071862  708863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:11.080777  708863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:11.090139  708863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:11.225728  708863 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:11.371375  708863 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:11.371468  708863 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:11.375858  708863 start.go:564] Will wait 60s for crictl version
	I1115 10:35:11.375948  708863 ssh_runner.go:195] Run: which crictl
	I1115 10:35:11.380737  708863 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:11.405923  708863 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:11.406077  708863 ssh_runner.go:195] Run: crio --version
	I1115 10:35:11.437023  708863 ssh_runner.go:195] Run: crio --version
	I1115 10:35:11.473488  708863 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:35:11.476379  708863 cli_runner.go:164] Run: docker network inspect no-preload-907610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:11.493664  708863 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:11.498082  708863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:11.508377  708863 kubeadm.go:884] updating cluster {Name:no-preload-907610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-907610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:11.508498  708863 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:11.508544  708863 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:11.542174  708863 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:11.542199  708863 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:11.542213  708863 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:11.542309  708863 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-907610 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-907610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:11.542404  708863 ssh_runner.go:195] Run: crio config
	I1115 10:35:11.608800  708863 cni.go:84] Creating CNI manager for ""
	I1115 10:35:11.608822  708863 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:11.608845  708863 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:11.608874  708863 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-907610 NodeName:no-preload-907610 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:11.609008  708863 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-907610"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:35:11.609089  708863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:11.618059  708863 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:11.618172  708863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:11.626884  708863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:35:11.640611  708863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:11.653566  708863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:35:11.666246  708863 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:11.669719  708863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:11.678987  708863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:11.809053  708863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:11.827178  708863 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610 for IP: 192.168.85.2
	I1115 10:35:11.827198  708863 certs.go:195] generating shared ca certs ...
	I1115 10:35:11.827215  708863 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:11.827349  708863 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:35:11.827402  708863 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:35:11.827426  708863 certs.go:257] generating profile certs ...
	I1115 10:35:11.827514  708863 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.key
	I1115 10:35:11.827591  708863 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/apiserver.key.608d3f0e
	I1115 10:35:11.827631  708863 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/proxy-client.key
	I1115 10:35:11.827749  708863 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:35:11.827782  708863 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:11.827790  708863 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:35:11.827818  708863 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:11.827845  708863 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:11.827870  708863 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:35:11.827916  708863 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:35:11.828582  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:11.854780  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:11.877703  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:11.898860  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:11.923451  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:35:11.945405  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:35:11.968837  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:12.009818  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:35:12.041016  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:35:12.071364  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:12.098110  708863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:35:12.118757  708863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:12.134313  708863 ssh_runner.go:195] Run: openssl version
	I1115 10:35:12.142986  708863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:35:12.154150  708863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:35:12.158144  708863 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:35:12.158209  708863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:35:12.202887  708863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:12.212503  708863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:12.222297  708863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:12.225980  708863 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:12.226045  708863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:12.266934  708863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:12.275123  708863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:35:12.283671  708863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:35:12.287554  708863 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:35:12.287621  708863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:35:12.329716  708863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:12.337880  708863 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:12.341683  708863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:12.384725  708863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:12.426241  708863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:12.467356  708863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:12.519793  708863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:12.576805  708863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:35:12.646298  708863 kubeadm.go:401] StartCluster: {Name:no-preload-907610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-907610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:12.646393  708863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:12.646463  708863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:12.711553  708863 cri.go:89] found id: "fbba8e0ca18f1bb361aade61f62504b671e8e02da9e21dc771c669d6472159f2"
	I1115 10:35:12.711584  708863 cri.go:89] found id: "2595af5ed79b0d008d0a4a9885bb6bb2d922c8e0fc4984e57ea1078e606230d7"
	I1115 10:35:12.711589  708863 cri.go:89] found id: "aa8b90296193a6708fd35513c6745262f53b36234f1f69ebb1d6aee50a60dfcd"
	I1115 10:35:12.711592  708863 cri.go:89] found id: ""
	I1115 10:35:12.711647  708863 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:12.753979  708863 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:12Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:12.754103  708863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:12.771148  708863 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:12.771169  708863 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:12.771235  708863 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:12.784937  708863 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:12.785925  708863 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-907610" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:12.786509  708863 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-907610" cluster setting kubeconfig missing "no-preload-907610" context setting]
	I1115 10:35:12.787455  708863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:12.789446  708863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:12.803996  708863 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:35:12.804032  708863 kubeadm.go:602] duration metric: took 32.856136ms to restartPrimaryControlPlane
	I1115 10:35:12.804042  708863 kubeadm.go:403] duration metric: took 157.755791ms to StartCluster
	I1115 10:35:12.804057  708863 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:12.804122  708863 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:12.805655  708863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:12.805938  708863 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:12.806419  708863 config.go:182] Loaded profile config "no-preload-907610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:12.806405  708863 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:12.806536  708863 addons.go:70] Setting storage-provisioner=true in profile "no-preload-907610"
	I1115 10:35:12.806552  708863 addons.go:239] Setting addon storage-provisioner=true in "no-preload-907610"
	W1115 10:35:12.806559  708863 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:12.806588  708863 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:35:12.807106  708863 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:35:12.809120  708863 addons.go:70] Setting default-storageclass=true in profile "no-preload-907610"
	I1115 10:35:12.809188  708863 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-907610"
	I1115 10:35:12.809808  708863 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:12.810177  708863 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:35:12.810861  708863 addons.go:70] Setting dashboard=true in profile "no-preload-907610"
	I1115 10:35:12.810890  708863 addons.go:239] Setting addon dashboard=true in "no-preload-907610"
	W1115 10:35:12.810910  708863 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:12.810945  708863 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:35:12.811453  708863 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:35:12.816398  708863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:12.855733  708863 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:12.859060  708863 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:12.859081  708863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:12.859147  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:12.868123  708863 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:12.871506  708863 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1115 10:35:09.624799  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	W1115 10:35:12.124823  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	I1115 10:35:12.875240  708863 addons.go:239] Setting addon default-storageclass=true in "no-preload-907610"
	W1115 10:35:12.875263  708863 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:12.875288  708863 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:35:12.875694  708863 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:35:12.878357  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:12.878386  708863 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:12.878459  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:12.911898  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:12.934459  708863 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:12.934485  708863 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:12.934552  708863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:35:12.934784  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:12.973591  708863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:35:13.143515  708863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:13.177097  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:13.177167  708863 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:13.238946  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:13.239018  708863 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:13.258160  708863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:13.278939  708863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:13.307312  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:13.307333  708863 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:13.356997  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:13.357017  708863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:13.410764  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:13.410829  708863 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:13.433490  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:13.433557  708863 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:13.457766  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:13.457832  708863 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:13.495168  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:13.495231  708863 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:13.522558  708863 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:13.522621  708863 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:13.546641  708863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:18.293765  708863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.150170737s)
	I1115 10:35:18.293825  708863 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.035572925s)
	I1115 10:35:18.293855  708863 node_ready.go:35] waiting up to 6m0s for node "no-preload-907610" to be "Ready" ...
	I1115 10:35:18.294154  708863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.015192758s)
	I1115 10:35:18.294404  708863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.747690277s)
	I1115 10:35:18.297371  708863 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-907610 addons enable metrics-server
	
	I1115 10:35:18.318837  708863 node_ready.go:49] node "no-preload-907610" is "Ready"
	I1115 10:35:18.318872  708863 node_ready.go:38] duration metric: took 25.000387ms for node "no-preload-907610" to be "Ready" ...
	I1115 10:35:18.318886  708863 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:18.318943  708863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:18.329851  708863 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1115 10:35:14.125661  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	W1115 10:35:16.625752  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	I1115 10:35:18.332631  708863 addons.go:515] duration metric: took 5.526221747s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:35:18.335086  708863 api_server.go:72] duration metric: took 5.529100067s to wait for apiserver process to appear ...
	I1115 10:35:18.335125  708863 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:18.335144  708863 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 10:35:18.348768  708863 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 10:35:18.350062  708863 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:18.350091  708863 api_server.go:131] duration metric: took 14.957549ms to wait for apiserver health ...
	I1115 10:35:18.350100  708863 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:18.354915  708863 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:18.354993  708863 system_pods.go:61] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:18.355038  708863 system_pods.go:61] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:18.355061  708863 system_pods.go:61] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:35:18.355100  708863 system_pods.go:61] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:18.355132  708863 system_pods.go:61] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:18.355158  708863 system_pods.go:61] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:35:18.355198  708863 system_pods.go:61] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:18.355221  708863 system_pods.go:61] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Running
	I1115 10:35:18.355260  708863 system_pods.go:74] duration metric: took 5.152825ms to wait for pod list to return data ...
	I1115 10:35:18.355287  708863 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:18.358053  708863 default_sa.go:45] found service account: "default"
	I1115 10:35:18.358116  708863 default_sa.go:55] duration metric: took 2.809489ms for default service account to be created ...
	I1115 10:35:18.358140  708863 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:18.360759  708863 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:18.360826  708863 system_pods.go:89] "coredns-66bc5c9577-ql8g6" [ce1fc969-663d-4f7e-87db-2b3bf3b6ee52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:18.360870  708863 system_pods.go:89] "etcd-no-preload-907610" [bdb23ef6-7c48-4d11-87cf-8671dbc10308] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:35:18.360898  708863 system_pods.go:89] "kindnet-kgnjv" [421d25ba-102f-4638-b4ea-1a99bb7ceab5] Running
	I1115 10:35:18.360943  708863 system_pods.go:89] "kube-apiserver-no-preload-907610" [d60fa2cd-faf5-45af-b8b5-03c138a84759] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:35:18.360972  708863 system_pods.go:89] "kube-controller-manager-no-preload-907610" [d2a703b6-8242-441a-9fb7-c4c20606e79d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:35:18.360997  708863 system_pods.go:89] "kube-proxy-rh8h4" [353b68d4-24ef-47ff-9420-6dfd96c66c24] Running
	I1115 10:35:18.361031  708863 system_pods.go:89] "kube-scheduler-no-preload-907610" [ea7c79c6-17f9-4a30-9dc9-43c8a47a8fb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:35:18.361055  708863 system_pods.go:89] "storage-provisioner" [217253e2-283b-49d2-8a84-30111b378edd] Running
	I1115 10:35:18.361077  708863 system_pods.go:126] duration metric: took 2.916899ms to wait for k8s-apps to be running ...
	I1115 10:35:18.361114  708863 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:18.361206  708863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:18.381747  708863 system_svc.go:56] duration metric: took 20.622827ms WaitForService to wait for kubelet
	I1115 10:35:18.381782  708863 kubeadm.go:587] duration metric: took 5.575792486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:18.381801  708863 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:18.384516  708863 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:35:18.384552  708863 node_conditions.go:123] node cpu capacity is 2
	I1115 10:35:18.384566  708863 node_conditions.go:105] duration metric: took 2.758651ms to run NodePressure ...
	I1115 10:35:18.384579  708863 start.go:242] waiting for startup goroutines ...
	I1115 10:35:18.384587  708863 start.go:247] waiting for cluster config update ...
	I1115 10:35:18.384597  708863 start.go:256] writing updated cluster config ...
	I1115 10:35:18.384900  708863 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:18.389460  708863 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:18.393241  708863 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ql8g6" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:35:19.125245  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	W1115 10:35:21.628383  704900 node_ready.go:57] node "embed-certs-531596" has "Ready":"False" status (will retry)
	I1115 10:35:22.625860  704900 node_ready.go:49] node "embed-certs-531596" is "Ready"
	I1115 10:35:22.625884  704900 node_ready.go:38] duration metric: took 40.00439007s for node "embed-certs-531596" to be "Ready" ...
	I1115 10:35:22.625898  704900 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:35:22.625957  704900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:35:22.641052  704900 api_server.go:72] duration metric: took 41.361530739s to wait for apiserver process to appear ...
	I1115 10:35:22.641078  704900 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:35:22.641098  704900 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:35:22.650425  704900 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:35:22.651488  704900 api_server.go:141] control plane version: v1.34.1
	I1115 10:35:22.651519  704900 api_server.go:131] duration metric: took 10.433433ms to wait for apiserver health ...
	I1115 10:35:22.651528  704900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:35:22.656542  704900 system_pods.go:59] 8 kube-system pods found
	I1115 10:35:22.656623  704900 system_pods.go:61] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:22.656647  704900 system_pods.go:61] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running
	I1115 10:35:22.656683  704900 system_pods.go:61] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running
	I1115 10:35:22.656711  704900 system_pods.go:61] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running
	I1115 10:35:22.656736  704900 system_pods.go:61] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running
	I1115 10:35:22.656760  704900 system_pods.go:61] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running
	I1115 10:35:22.656794  704900 system_pods.go:61] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running
	I1115 10:35:22.656824  704900 system_pods.go:61] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:22.656848  704900 system_pods.go:74] duration metric: took 5.312607ms to wait for pod list to return data ...
	I1115 10:35:22.656871  704900 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:35:22.659829  704900 default_sa.go:45] found service account: "default"
	I1115 10:35:22.659849  704900 default_sa.go:55] duration metric: took 2.946503ms for default service account to be created ...
	I1115 10:35:22.659859  704900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:35:22.665442  704900 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:22.665470  704900 system_pods.go:89] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:22.665476  704900 system_pods.go:89] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running
	I1115 10:35:22.665482  704900 system_pods.go:89] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running
	I1115 10:35:22.665487  704900 system_pods.go:89] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running
	I1115 10:35:22.665493  704900 system_pods.go:89] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running
	I1115 10:35:22.665498  704900 system_pods.go:89] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running
	I1115 10:35:22.665503  704900 system_pods.go:89] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running
	I1115 10:35:22.665509  704900 system_pods.go:89] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:22.665534  704900 retry.go:31] will retry after 301.883927ms: missing components: kube-dns
	I1115 10:35:22.994244  704900 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:22.994331  704900 system_pods.go:89] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:22.994356  704900 system_pods.go:89] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running
	I1115 10:35:22.994396  704900 system_pods.go:89] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running
	I1115 10:35:22.994421  704900 system_pods.go:89] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running
	I1115 10:35:22.994446  704900 system_pods.go:89] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running
	I1115 10:35:22.994472  704900 system_pods.go:89] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running
	I1115 10:35:22.994506  704900 system_pods.go:89] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running
	I1115 10:35:22.994547  704900 system_pods.go:89] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:22.994584  704900 retry.go:31] will retry after 329.703794ms: missing components: kube-dns
	I1115 10:35:23.327951  704900 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:23.327987  704900 system_pods.go:89] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:35:23.327996  704900 system_pods.go:89] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running
	I1115 10:35:23.328003  704900 system_pods.go:89] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running
	I1115 10:35:23.328008  704900 system_pods.go:89] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running
	I1115 10:35:23.328012  704900 system_pods.go:89] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running
	I1115 10:35:23.328016  704900 system_pods.go:89] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running
	I1115 10:35:23.328021  704900 system_pods.go:89] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running
	I1115 10:35:23.328027  704900 system_pods.go:89] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:35:23.328044  704900 retry.go:31] will retry after 394.555925ms: missing components: kube-dns
	W1115 10:35:20.399655  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:22.410023  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:23.726424  704900 system_pods.go:86] 8 kube-system pods found
	I1115 10:35:23.726458  704900 system_pods.go:89] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Running
	I1115 10:35:23.726465  704900 system_pods.go:89] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running
	I1115 10:35:23.726470  704900 system_pods.go:89] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running
	I1115 10:35:23.726475  704900 system_pods.go:89] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running
	I1115 10:35:23.726479  704900 system_pods.go:89] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running
	I1115 10:35:23.726483  704900 system_pods.go:89] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running
	I1115 10:35:23.726488  704900 system_pods.go:89] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running
	I1115 10:35:23.726492  704900 system_pods.go:89] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Running
	I1115 10:35:23.726499  704900 system_pods.go:126] duration metric: took 1.066635493s to wait for k8s-apps to be running ...
	I1115 10:35:23.726510  704900 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:35:23.726566  704900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:35:23.745300  704900 system_svc.go:56] duration metric: took 18.779383ms WaitForService to wait for kubelet
	I1115 10:35:23.745324  704900 kubeadm.go:587] duration metric: took 42.465808163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:23.745342  704900 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:35:23.750065  704900 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:35:23.750099  704900 node_conditions.go:123] node cpu capacity is 2
	I1115 10:35:23.750113  704900 node_conditions.go:105] duration metric: took 4.765438ms to run NodePressure ...
	I1115 10:35:23.750126  704900 start.go:242] waiting for startup goroutines ...
	I1115 10:35:23.750133  704900 start.go:247] waiting for cluster config update ...
	I1115 10:35:23.750144  704900 start.go:256] writing updated cluster config ...
	I1115 10:35:23.750444  704900 ssh_runner.go:195] Run: rm -f paused
	I1115 10:35:23.754386  704900 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:23.759143  704900 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sl29r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.764562  704900 pod_ready.go:94] pod "coredns-66bc5c9577-sl29r" is "Ready"
	I1115 10:35:23.764585  704900 pod_ready.go:86] duration metric: took 5.420551ms for pod "coredns-66bc5c9577-sl29r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.767130  704900 pod_ready.go:83] waiting for pod "etcd-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.773173  704900 pod_ready.go:94] pod "etcd-embed-certs-531596" is "Ready"
	I1115 10:35:23.773200  704900 pod_ready.go:86] duration metric: took 6.045781ms for pod "etcd-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.775387  704900 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.779883  704900 pod_ready.go:94] pod "kube-apiserver-embed-certs-531596" is "Ready"
	I1115 10:35:23.779954  704900 pod_ready.go:86] duration metric: took 4.497508ms for pod "kube-apiserver-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:23.782985  704900 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.159183  704900 pod_ready.go:94] pod "kube-controller-manager-embed-certs-531596" is "Ready"
	I1115 10:35:24.159216  704900 pod_ready.go:86] duration metric: took 376.204651ms for pod "kube-controller-manager-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.358826  704900 pod_ready.go:83] waiting for pod "kube-proxy-nqfl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.758770  704900 pod_ready.go:94] pod "kube-proxy-nqfl8" is "Ready"
	I1115 10:35:24.758801  704900 pod_ready.go:86] duration metric: took 399.948236ms for pod "kube-proxy-nqfl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:24.959168  704900 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:25.359335  704900 pod_ready.go:94] pod "kube-scheduler-embed-certs-531596" is "Ready"
	I1115 10:35:25.359360  704900 pod_ready.go:86] duration metric: took 400.160677ms for pod "kube-scheduler-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:25.359374  704900 pod_ready.go:40] duration metric: took 1.60495614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:35:25.448492  704900 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:35:25.455520  704900 out.go:179] * Done! kubectl is now configured to use "embed-certs-531596" cluster and "default" namespace by default
	W1115 10:35:24.899933  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:26.900177  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:29.398861  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:31.399841  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:33.898956  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
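The interleaved output above covers the two profiles (no-preload-907610 and embed-certs-531596) running the same post-start checks: wait for the node to report Ready, poll the apiserver's /healthz endpoint until it answers 200 "ok", list kube-system pods and retry while kube-dns is still pending, confirm the kubelet unit is active, and finally wait for the control-plane pods to become Ready. Below is a minimal sketch of that healthz poll; the URL, timeout, and TLS handling are placeholders chosen to keep it self-contained, not minikube's actual implementation.

	// healthzpoll: illustrative sketch of the probe the log above records.
	// Poll the apiserver's /healthz until it returns 200 "ok" or a deadline expires.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// A real client would trust the cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // simple fixed backoff between attempts
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}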
	
	
	==> CRI-O <==
	Nov 15 10:35:22 embed-certs-531596 crio[840]: time="2025-11-15T10:35:22.858976688Z" level=info msg="Created container af3b63daae94cbb15912f2cb245b11cef98c207121ce5451f337c9ad8ce8a63d: kube-system/coredns-66bc5c9577-sl29r/coredns" id=7b36221a-c38b-40c9-ad1f-ca18de0fd3d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:22 embed-certs-531596 crio[840]: time="2025-11-15T10:35:22.872988253Z" level=info msg="Starting container: af3b63daae94cbb15912f2cb245b11cef98c207121ce5451f337c9ad8ce8a63d" id=002d4a5e-eb9d-47a5-9c34-3c01b56c86ef name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:22 embed-certs-531596 crio[840]: time="2025-11-15T10:35:22.874744577Z" level=info msg="Started container" PID=1741 containerID=af3b63daae94cbb15912f2cb245b11cef98c207121ce5451f337c9ad8ce8a63d description=kube-system/coredns-66bc5c9577-sl29r/coredns id=002d4a5e-eb9d-47a5-9c34-3c01b56c86ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=837f354c168a4f7cc1c4a263a33a3a6e05164409a2f6383548223a14dee11d54
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.036240382Z" level=info msg="Running pod sandbox: default/busybox/POD" id=093626dd-a828-4c8e-b8b9-2a3cfc3c0783 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.036309443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.052012875Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b UID:70b4f86d-cd24-4414-8fb3-e393fdc4fbe1 NetNS:/var/run/netns/5471fde0-11e5-4643-b5f6-c89130796dcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078e80}] Aliases:map[]}"
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.052061103Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.068069191Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b UID:70b4f86d-cd24-4414-8fb3-e393fdc4fbe1 NetNS:/var/run/netns/5471fde0-11e5-4643-b5f6-c89130796dcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078e80}] Aliases:map[]}"
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.068259306Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.079484135Z" level=info msg="Ran pod sandbox 872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b with infra container: default/busybox/POD" id=093626dd-a828-4c8e-b8b9-2a3cfc3c0783 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.080689475Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=58f8b61d-350b-409c-b04a-d68a4feb324c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.080817857Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=58f8b61d-350b-409c-b04a-d68a4feb324c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.080862533Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=58f8b61d-350b-409c-b04a-d68a4feb324c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.082063624Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bed17d9e-76b6-459c-843d-7e326c62777a name=/runtime.v1.ImageService/PullImage
	Nov 15 10:35:26 embed-certs-531596 crio[840]: time="2025-11-15T10:35:26.086133508Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.402109712Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=bed17d9e-76b6-459c-843d-7e326c62777a name=/runtime.v1.ImageService/PullImage
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.403122156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d2096f9-cf32-4e52-9f3e-28e6d257ea46 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.404935849Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e241222-33ac-4aa4-ac34-88633b4fc4ec name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.413044456Z" level=info msg="Creating container: default/busybox/busybox" id=b313656c-f018-44d8-9591-42229816da5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.413181273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.419095036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.419584844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.437784723Z" level=info msg="Created container 32e56511cc9026aab429e875c27732347f912e66b6f2c3c155860baa63fb3bd9: default/busybox/busybox" id=b313656c-f018-44d8-9591-42229816da5c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.438726547Z" level=info msg="Starting container: 32e56511cc9026aab429e875c27732347f912e66b6f2c3c155860baa63fb3bd9" id=60875533-31f7-4c56-8c84-a721d1612fc0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:35:28 embed-certs-531596 crio[840]: time="2025-11-15T10:35:28.440471418Z" level=info msg="Started container" PID=1793 containerID=32e56511cc9026aab429e875c27732347f912e66b6f2c3c155860baa63fb3bd9 description=default/busybox/busybox id=60875533-31f7-4c56-8c84-a721d1612fc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	32e56511cc902       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   872e0faa608e6       busybox                                      default
	af3b63daae94c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   837f354c168a4       coredns-66bc5c9577-sl29r                     kube-system
	8a7352e964a4f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   70bc814b0afb2       storage-provisioner                          kube-system
	2e2410a640388       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   80093c1fe8a87       kube-proxy-nqfl8                             kube-system
	0a754a3c63b57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   932e4aeeec4f8       kindnet-9pzmc                                kube-system
	7b2d0e203e0d6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   8a641cd14170c       etcd-embed-certs-531596                      kube-system
	109378cef3442       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   f9d96fe222b14       kube-apiserver-embed-certs-531596            kube-system
	45b59e0daca47       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ecaa51ed44198       kube-controller-manager-embed-certs-531596   kube-system
	0c12c05fb7725       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   8a487253f720b       kube-scheduler-embed-certs-531596            kube-system
	
	
	==> coredns [af3b63daae94cbb15912f2cb245b11cef98c207121ce5451f337c9ad8ce8a63d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45228 - 57019 "HINFO IN 587694168051203278.4678318692259995435. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032706807s
	
	
	==> describe nodes <==
	Name:               embed-certs-531596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-531596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=embed-certs-531596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-531596
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:35:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:27 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:27 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:27 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:27 +0000   Sat, 15 Nov 2025 10:35:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-531596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                86513864-e880-4a89-8b90-c692d6bc7e85
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-sl29r                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-531596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-9pzmc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-531596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-531596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-nqfl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-531596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-531596 event: Registered Node embed-certs-531596 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-531596 status is now: NodeReady
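The Allocated resources block above is just the sum of the pods' requests and limits expressed as a share of the node's allocatable capacity, with fractions truncated: 850m of CPU requested against 2 allocatable CPUs prints as 42%, and 220Mi of memory against 8022300Ki prints as 2%. A toy recomputation, using only the figures from the table above (whether kubectl truncates or rounds is an implementation detail; truncation reproduces these numbers):

	// Illustrative arithmetic only: recompute the "Allocated resources"
	// percentages from the Allocatable and Requests figures printed above.
	package main
	
	import "fmt"
	
	func main() {
		allocatableCPUMilli := 2000.0  // 2 CPUs from the Allocatable block
		requestedCPUMilli := 850.0     // sum of CPU requests in the pod table
		allocatableMemKi := 8022300.0  // memory from the Allocatable block
		requestedMemKi := 220.0 * 1024 // 220Mi of memory requests, in Ki
	
		fmt.Printf("cpu: %d%%\n", int(100*requestedCPUMilli/allocatableCPUMilli)) // 42%
		fmt.Printf("memory: %d%%\n", int(100*requestedMemKi/allocatableMemKi))    // 2%
	}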
	
	
	==> dmesg <==
	[Nov15 10:11] overlayfs: idmapped layers are currently not supported
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7b2d0e203e0d6bcd74d5395960aa48a4f5b687456b0d09082e7ba8f74ecd519e] <==
	{"level":"warn","ts":"2025-11-15T10:34:31.847388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.865225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.881724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.898238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.914759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.931250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.953426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.969061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.982716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:31.998257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.016970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.038858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.051412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.069829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.090423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.119471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.126821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.143470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.158005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.206891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.231854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.253972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.278002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.290809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:34:32.393500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58264","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:36 up  5:18,  0 user,  load average: 3.61, 3.46, 2.91
	Linux embed-certs-531596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a754a3c63b57e2480cb04b496d6dd2120eb3b68482b165f4b391a38e3bf8ea0] <==
	I1115 10:34:41.744915       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:34:41.776690       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:34:41.776842       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:34:41.776857       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:34:41.776869       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:34:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:34:41.968550       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:34:41.968568       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:34:41.968576       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:34:41.968900       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:35:11.968250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:35:11.969524       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 10:35:11.969655       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:35:11.969735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 10:35:13.369698       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:13.369806       1 metrics.go:72] Registering metrics
	I1115 10:35:13.370007       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:21.974131       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:21.974170       1 main.go:301] handling current node
	I1115 10:35:31.969675       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:35:31.969723       1 main.go:301] handling current node
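The kindnet log above shows the usual client-go informer behaviour across an apiserver restart: the reflector's LIST calls fail with i/o timeouts while 10.96.0.1:443 is unreachable, are retried internally, and the controller reports "Caches are synced" once the first list succeeds. A minimal sketch of that pattern follows (a standard client-go shared informer against the Node resource; this is not kindnet's own code, and it assumes it runs inside a cluster):

	// Sketch: list/watch Nodes with a shared informer and wait for the cache to sync.
	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		// The reflector behind the informer retries failed LIST/WATCH calls on its
		// own, which is why the transient i/o timeouts above are followed by a
		// successful "Caches are synced" line once the apiserver is reachable.
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			fmt.Println("stopped before the node cache synced")
			return
		}
		fmt.Println("node cache synced")
	}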
	
	
	==> kube-apiserver [109378cef34420508290f7830a96fda4b59dd0664f422d010b15a23bfdfde4ad] <==
	I1115 10:34:33.462285       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:33.462500       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1115 10:34:33.476316       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1115 10:34:33.476505       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1115 10:34:33.515786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:33.531515       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:34:33.679574       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:34:34.104403       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:34:34.109211       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:34:34.109295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:34:34.895592       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:34:34.955263       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:34:35.080330       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:34:35.088223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1115 10:34:35.089406       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:34:35.094986       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:34:35.210660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:34:36.303102       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:34:36.328877       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:34:36.347104       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:34:40.961981       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:34:41.217559       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:41.224262       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:34:41.265004       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1115 10:35:34.917540       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:52472: use of closed network connection
	
	
	==> kube-controller-manager [45b59e0daca479fdb0efc128a9a116f33338ed0c4f5b02135ea79bc07640eb85] <==
	I1115 10:34:40.214182       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:34:40.215706       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:34:40.215813       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:34:40.215940       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:34:40.217100       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:34:40.217742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:34:40.221275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:34:40.222605       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:34:40.226108       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:34:40.232489       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-531596" podCIDRs=["10.244.0.0/24"]
	I1115 10:34:40.236172       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:34:40.256718       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:34:40.258334       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:34:40.258728       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:34:40.262750       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:34:40.258888       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:34:40.264513       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:34:40.264931       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:34:40.258916       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:34:40.267818       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:34:40.259112       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:34:40.259124       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:34:40.259173       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:34:40.269336       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:35:25.215906       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2e2410a64038817e933a66b171a7d78dd1f91b4cd763b8f9b4088003f8427890] <==
	I1115 10:34:42.192445       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:34:42.323882       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:34:42.445704       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:34:42.450460       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:34:42.450574       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:34:42.521618       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:34:42.521682       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:34:42.527482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:34:42.527944       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:34:42.527958       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:34:42.534312       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:34:42.534390       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:34:42.534732       1 config.go:200] "Starting service config controller"
	I1115 10:34:42.534777       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:34:42.535171       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:34:42.536463       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:34:42.544896       1 config.go:309] "Starting node config controller"
	I1115 10:34:42.544915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:34:42.544924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:34:42.642625       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:34:42.652288       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:34:42.652336       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0c12c05fb772508d13cbf3ef9fc16f3463c09d22e5353162dc0c1d83a60d54fb] <==
	E1115 10:34:33.389935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:34:33.390024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:34:33.390099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:34:33.390165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:34:33.390228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:34:33.390289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:34:33.390348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:34:33.390405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:34:33.390466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:34:33.390527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:34:33.390585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:34:33.390647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:34:33.390745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:34:33.390814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:34:33.410759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 10:34:34.225924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:34:34.229414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:34:34.269356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:34:34.309101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:34:34.480780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:34:34.500977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:34:34.512408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:34:34.526794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:34:34.835157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1115 10:34:37.637527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:34:40 embed-certs-531596 kubelet[1316]: I1115 10:34:40.271270    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:34:40 embed-certs-531596 kubelet[1316]: I1115 10:34:40.272024    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037089    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/32c8087e-941a-4953-ae21-a83d98b0fc8f-kube-proxy\") pod \"kube-proxy-nqfl8\" (UID: \"32c8087e-941a-4953-ae21-a83d98b0fc8f\") " pod="kube-system/kube-proxy-nqfl8"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037376    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32c8087e-941a-4953-ae21-a83d98b0fc8f-xtables-lock\") pod \"kube-proxy-nqfl8\" (UID: \"32c8087e-941a-4953-ae21-a83d98b0fc8f\") " pod="kube-system/kube-proxy-nqfl8"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037487    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32c8087e-941a-4953-ae21-a83d98b0fc8f-lib-modules\") pod \"kube-proxy-nqfl8\" (UID: \"32c8087e-941a-4953-ae21-a83d98b0fc8f\") " pod="kube-system/kube-proxy-nqfl8"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037586    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86sr\" (UniqueName: \"kubernetes.io/projected/32c8087e-941a-4953-ae21-a83d98b0fc8f-kube-api-access-g86sr\") pod \"kube-proxy-nqfl8\" (UID: \"32c8087e-941a-4953-ae21-a83d98b0fc8f\") " pod="kube-system/kube-proxy-nqfl8"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037785    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22-lib-modules\") pod \"kindnet-9pzmc\" (UID: \"cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22\") " pod="kube-system/kindnet-9pzmc"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037890    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcgp\" (UniqueName: \"kubernetes.io/projected/cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22-kube-api-access-rjcgp\") pod \"kindnet-9pzmc\" (UID: \"cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22\") " pod="kube-system/kindnet-9pzmc"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.037984    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22-cni-cfg\") pod \"kindnet-9pzmc\" (UID: \"cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22\") " pod="kube-system/kindnet-9pzmc"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.038180    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22-xtables-lock\") pod \"kindnet-9pzmc\" (UID: \"cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22\") " pod="kube-system/kindnet-9pzmc"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: I1115 10:34:41.150001    1316 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: W1115 10:34:41.393712    1316 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/crio-80093c1fe8a8725f0e6c1e365f140a6edb26dfb69493e63f871b0ae12d19534d WatchSource:0}: Error finding container 80093c1fe8a8725f0e6c1e365f140a6edb26dfb69493e63f871b0ae12d19534d: Status 404 returned error can't find the container with id 80093c1fe8a8725f0e6c1e365f140a6edb26dfb69493e63f871b0ae12d19534d
	Nov 15 10:34:41 embed-certs-531596 kubelet[1316]: W1115 10:34:41.404939    1316 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/crio-932e4aeeec4f8af4518ed8bf16b10676b4a614dcb3e5660a0c9ac9a9ebfd14cd WatchSource:0}: Error finding container 932e4aeeec4f8af4518ed8bf16b10676b4a614dcb3e5660a0c9ac9a9ebfd14cd: Status 404 returned error can't find the container with id 932e4aeeec4f8af4518ed8bf16b10676b4a614dcb3e5660a0c9ac9a9ebfd14cd
	Nov 15 10:34:42 embed-certs-531596 kubelet[1316]: I1115 10:34:42.420249    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9pzmc" podStartSLOduration=2.420231874 podStartE2EDuration="2.420231874s" podCreationTimestamp="2025-11-15 10:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:42.420027244 +0000 UTC m=+6.256708545" watchObservedRunningTime="2025-11-15 10:34:42.420231874 +0000 UTC m=+6.256913184"
	Nov 15 10:34:42 embed-certs-531596 kubelet[1316]: I1115 10:34:42.457728    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nqfl8" podStartSLOduration=2.457709265 podStartE2EDuration="2.457709265s" podCreationTimestamp="2025-11-15 10:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:34:42.456921537 +0000 UTC m=+6.293602838" watchObservedRunningTime="2025-11-15 10:34:42.457709265 +0000 UTC m=+6.294390550"
	Nov 15 10:35:22 embed-certs-531596 kubelet[1316]: I1115 10:35:22.346514    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:35:22 embed-certs-531596 kubelet[1316]: I1115 10:35:22.470108    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sc87\" (UniqueName: \"kubernetes.io/projected/2feb3053-812e-439e-b003-38aa75d3cf38-kube-api-access-6sc87\") pod \"storage-provisioner\" (UID: \"2feb3053-812e-439e-b003-38aa75d3cf38\") " pod="kube-system/storage-provisioner"
	Nov 15 10:35:22 embed-certs-531596 kubelet[1316]: I1115 10:35:22.470418    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a3916e-f489-4ca0-aa5f-05b2370df255-config-volume\") pod \"coredns-66bc5c9577-sl29r\" (UID: \"01a3916e-f489-4ca0-aa5f-05b2370df255\") " pod="kube-system/coredns-66bc5c9577-sl29r"
	Nov 15 10:35:22 embed-certs-531596 kubelet[1316]: I1115 10:35:22.470527    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wph48\" (UniqueName: \"kubernetes.io/projected/01a3916e-f489-4ca0-aa5f-05b2370df255-kube-api-access-wph48\") pod \"coredns-66bc5c9577-sl29r\" (UID: \"01a3916e-f489-4ca0-aa5f-05b2370df255\") " pod="kube-system/coredns-66bc5c9577-sl29r"
	Nov 15 10:35:22 embed-certs-531596 kubelet[1316]: I1115 10:35:22.470638    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2feb3053-812e-439e-b003-38aa75d3cf38-tmp\") pod \"storage-provisioner\" (UID: \"2feb3053-812e-439e-b003-38aa75d3cf38\") " pod="kube-system/storage-provisioner"
	Nov 15 10:35:22 embed-certs-531596 kubelet[1316]: W1115 10:35:22.765590    1316 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/crio-837f354c168a4f7cc1c4a263a33a3a6e05164409a2f6383548223a14dee11d54 WatchSource:0}: Error finding container 837f354c168a4f7cc1c4a263a33a3a6e05164409a2f6383548223a14dee11d54: Status 404 returned error can't find the container with id 837f354c168a4f7cc1c4a263a33a3a6e05164409a2f6383548223a14dee11d54
	Nov 15 10:35:23 embed-certs-531596 kubelet[1316]: I1115 10:35:23.557800    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sl29r" podStartSLOduration=42.557769209 podStartE2EDuration="42.557769209s" podCreationTimestamp="2025-11-15 10:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:23.534553806 +0000 UTC m=+47.371235099" watchObservedRunningTime="2025-11-15 10:35:23.557769209 +0000 UTC m=+47.394450494"
	Nov 15 10:35:25 embed-certs-531596 kubelet[1316]: I1115 10:35:25.722867    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.722832335 podStartE2EDuration="43.722832335s" podCreationTimestamp="2025-11-15 10:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:35:23.610599082 +0000 UTC m=+47.447280376" watchObservedRunningTime="2025-11-15 10:35:25.722832335 +0000 UTC m=+49.559513644"
	Nov 15 10:35:25 embed-certs-531596 kubelet[1316]: I1115 10:35:25.795578    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvq72\" (UniqueName: \"kubernetes.io/projected/70b4f86d-cd24-4414-8fb3-e393fdc4fbe1-kube-api-access-mvq72\") pod \"busybox\" (UID: \"70b4f86d-cd24-4414-8fb3-e393fdc4fbe1\") " pod="default/busybox"
	Nov 15 10:35:26 embed-certs-531596 kubelet[1316]: W1115 10:35:26.078332    1316 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/crio-872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b WatchSource:0}: Error finding container 872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b: Status 404 returned error can't find the container with id 872e0faa608e6223245a9a574c7b6c76e067a84352932548e0369a11940fb36b
	
	
	==> storage-provisioner [8a7352e964a4f18fef195d5cbcef7abc65ef257008bc98ca04fece141fa008f4] <==
	I1115 10:35:22.843270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:35:22.856513       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:35:22.856574       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:35:22.861971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:22.907935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:22.911363       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:35:22.911896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-531596_bd63ff04-7d65-4dab-bd3f-926e67eaaf82!
	I1115 10:35:22.911944       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"50582e2f-871b-4ae3-bc92-dc6483b1130c", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-531596_bd63ff04-7d65-4dab-bd3f-926e67eaaf82 became leader
	W1115 10:35:22.928346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:22.939962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:35:23.012770       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-531596_bd63ff04-7d65-4dab-bd3f-926e67eaaf82!
	W1115 10:35:24.943904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:24.951542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:26.954556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:26.961859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:28.964776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:28.969254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:30.972109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:30.977432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:32.980908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:32.988356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:34.991238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:34.997153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-531596 -n embed-certs-531596
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-531596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)
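Note on the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner log above: they are emitted while the provisioner acquires and renews its leader lease on a v1 Endpoints object (leaderelection.go:243, lease kube-system/k8s.io-minikube-hostpath). The sketch below is a hypothetical client-go example of the same election done against a coordination.k8s.io/v1 Lease instead; it is not the storage-provisioner's actual code, and everything except the lease name and namespace (taken from the log) is an assumption.

// Hypothetical sketch: Lease-based leader election with client-go.
// The lease name/namespace match the log above; the rest is illustrative.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the provisioner runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lock on a coordination.k8s.io/v1 Lease instead of a v1 Endpoints object,
	// which is what triggers the deprecation warnings in the log.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting provisioner controller")
				// provisioner controller would start here
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}

A Lease-based lock keeps the same acquire/renew behaviour seen in the log (lease acquired at 10:35:22.911363) without touching the deprecated Endpoints API.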

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-907610 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-907610 --alsologtostderr -v=1: exit status 80 (2.336036746s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-907610 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:36:12.898645  713936 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:12.898802  713936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:12.898824  713936 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:12.898844  713936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:12.899119  713936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:36:12.899415  713936 out.go:368] Setting JSON to false
	I1115 10:36:12.899482  713936 mustload.go:66] Loading cluster: no-preload-907610
	I1115 10:36:12.899889  713936 config.go:182] Loaded profile config "no-preload-907610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:12.900527  713936 cli_runner.go:164] Run: docker container inspect no-preload-907610 --format={{.State.Status}}
	I1115 10:36:12.918352  713936 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:36:12.918708  713936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:13.002491  713936 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-15 10:36:12.98761863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:36:13.003307  713936 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-907610 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:36:13.006870  713936 out.go:179] * Pausing node no-preload-907610 ... 
	I1115 10:36:13.011080  713936 host.go:66] Checking if "no-preload-907610" exists ...
	I1115 10:36:13.011444  713936 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:13.011499  713936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-907610
	I1115 10:36:13.034079  713936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/no-preload-907610/id_rsa Username:docker}
	I1115 10:36:13.144682  713936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:13.160797  713936 pause.go:52] kubelet running: true
	I1115 10:36:13.160864  713936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:13.585912  713936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:13.585991  713936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:13.736059  713936 cri.go:89] found id: "c4065836497343e6d9303217966ac46ac647fdbce23f7e49368d3880af4e8fc6"
	I1115 10:36:13.736124  713936 cri.go:89] found id: "3895648769f9b491dcabadc00916d0ddad17303c433290e0f2ea5f58450bca76"
	I1115 10:36:13.736145  713936 cri.go:89] found id: "e83f82003a36807fb00707347761aad18e6318b7683492f9be8eb5f018407286"
	I1115 10:36:13.736169  713936 cri.go:89] found id: "c244dd133e1dd4c2d238dd64ce7a3cf6ebc1eab1f51072dc25ab2e89edea3d0a"
	I1115 10:36:13.736204  713936 cri.go:89] found id: "718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9"
	I1115 10:36:13.736236  713936 cri.go:89] found id: "e39b03ee83b22e578b9f5605c3bd8e0ef77ee33deddbb7aa1624c005fece9124"
	I1115 10:36:13.736255  713936 cri.go:89] found id: "fbba8e0ca18f1bb361aade61f62504b671e8e02da9e21dc771c669d6472159f2"
	I1115 10:36:13.736274  713936 cri.go:89] found id: "2595af5ed79b0d008d0a4a9885bb6bb2d922c8e0fc4984e57ea1078e606230d7"
	I1115 10:36:13.736298  713936 cri.go:89] found id: "aa8b90296193a6708fd35513c6745262f53b36234f1f69ebb1d6aee50a60dfcd"
	I1115 10:36:13.736336  713936 cri.go:89] found id: "32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425"
	I1115 10:36:13.736355  713936 cri.go:89] found id: "629e55498715b583593925f583bf65140ba0524b62727038c86feededab7c232"
	I1115 10:36:13.736378  713936 cri.go:89] found id: ""
	I1115 10:36:13.736457  713936 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:13.760009  713936 retry.go:31] will retry after 373.370813ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:13Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:14.134324  713936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:14.150117  713936 pause.go:52] kubelet running: false
	I1115 10:36:14.150246  713936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:14.402997  713936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:14.403129  713936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:14.520530  713936 cri.go:89] found id: "c4065836497343e6d9303217966ac46ac647fdbce23f7e49368d3880af4e8fc6"
	I1115 10:36:14.520593  713936 cri.go:89] found id: "3895648769f9b491dcabadc00916d0ddad17303c433290e0f2ea5f58450bca76"
	I1115 10:36:14.520614  713936 cri.go:89] found id: "e83f82003a36807fb00707347761aad18e6318b7683492f9be8eb5f018407286"
	I1115 10:36:14.520638  713936 cri.go:89] found id: "c244dd133e1dd4c2d238dd64ce7a3cf6ebc1eab1f51072dc25ab2e89edea3d0a"
	I1115 10:36:14.520676  713936 cri.go:89] found id: "718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9"
	I1115 10:36:14.520704  713936 cri.go:89] found id: "e39b03ee83b22e578b9f5605c3bd8e0ef77ee33deddbb7aa1624c005fece9124"
	I1115 10:36:14.520727  713936 cri.go:89] found id: "fbba8e0ca18f1bb361aade61f62504b671e8e02da9e21dc771c669d6472159f2"
	I1115 10:36:14.520751  713936 cri.go:89] found id: "2595af5ed79b0d008d0a4a9885bb6bb2d922c8e0fc4984e57ea1078e606230d7"
	I1115 10:36:14.520784  713936 cri.go:89] found id: "aa8b90296193a6708fd35513c6745262f53b36234f1f69ebb1d6aee50a60dfcd"
	I1115 10:36:14.520824  713936 cri.go:89] found id: "32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425"
	I1115 10:36:14.520846  713936 cri.go:89] found id: "629e55498715b583593925f583bf65140ba0524b62727038c86feededab7c232"
	I1115 10:36:14.520870  713936 cri.go:89] found id: ""
	I1115 10:36:14.520949  713936 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:14.535729  713936 retry.go:31] will retry after 229.920284ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:14Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:36:14.766062  713936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:14.780202  713936 pause.go:52] kubelet running: false
	I1115 10:36:14.780339  713936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:36:15.044642  713936 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:36:15.044766  713936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:36:15.142425  713936 cri.go:89] found id: "c4065836497343e6d9303217966ac46ac647fdbce23f7e49368d3880af4e8fc6"
	I1115 10:36:15.142501  713936 cri.go:89] found id: "3895648769f9b491dcabadc00916d0ddad17303c433290e0f2ea5f58450bca76"
	I1115 10:36:15.142521  713936 cri.go:89] found id: "e83f82003a36807fb00707347761aad18e6318b7683492f9be8eb5f018407286"
	I1115 10:36:15.142545  713936 cri.go:89] found id: "c244dd133e1dd4c2d238dd64ce7a3cf6ebc1eab1f51072dc25ab2e89edea3d0a"
	I1115 10:36:15.142577  713936 cri.go:89] found id: "718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9"
	I1115 10:36:15.142603  713936 cri.go:89] found id: "e39b03ee83b22e578b9f5605c3bd8e0ef77ee33deddbb7aa1624c005fece9124"
	I1115 10:36:15.142626  713936 cri.go:89] found id: "fbba8e0ca18f1bb361aade61f62504b671e8e02da9e21dc771c669d6472159f2"
	I1115 10:36:15.142649  713936 cri.go:89] found id: "2595af5ed79b0d008d0a4a9885bb6bb2d922c8e0fc4984e57ea1078e606230d7"
	I1115 10:36:15.142683  713936 cri.go:89] found id: "aa8b90296193a6708fd35513c6745262f53b36234f1f69ebb1d6aee50a60dfcd"
	I1115 10:36:15.142711  713936 cri.go:89] found id: "32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425"
	I1115 10:36:15.142735  713936 cri.go:89] found id: "629e55498715b583593925f583bf65140ba0524b62727038c86feededab7c232"
	I1115 10:36:15.142758  713936 cri.go:89] found id: ""
	I1115 10:36:15.142851  713936 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:36:15.162836  713936 out.go:203] 
	W1115 10:36:15.165864  713936 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:36:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:36:15.165890  713936 out.go:285] * 
	* 
	W1115 10:36:15.174465  713936 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:36:15.177587  713936 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-907610 --alsologtostderr -v=1 failed: exit status 80
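The stderr trace above shows the pause sequence on this node: check whether kubelet is active and disable it, enumerate CRI containers in the kube-system/kubernetes-dashboard/istio-operator namespaces via crictl, then run `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" and is retried twice before the command aborts with GUEST_PAUSE / exit status 80. The following is a minimal, hypothetical Go sketch of that probe-and-retry step only (not minikube's implementation); it assumes `sudo` and `runc` are available on PATH.

// Hypothetical reproduction of the failing probe in the trace above:
// repeatedly ask runc for its running containers with short backoffs,
// then give up the same way the pause command does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runcList shells out exactly like the failing step: `sudo runc list -f json`.
func runcList() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	return string(out), err
}

func main() {
	// The trace shows two sub-second retries before the error is surfaced.
	backoffs := []time.Duration{373 * time.Millisecond, 229 * time.Millisecond}
	for i := 0; ; i++ {
		out, err := runcList()
		if err == nil {
			fmt.Println("running runc containers:", out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s\n", i+1, err, out)
		if i >= len(backoffs) {
			// Mirrors the GUEST_PAUSE exit: the last error is reported and pause aborts.
			fmt.Println("giving up: list running: runc:", err)
			return
		}
		time.Sleep(backoffs[i])
	}
}

On this node every attempt fails the same way ("/run/runc" is missing), so the retries only delay the eventual exit status 80 recorded in the failure summary.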
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-907610
helpers_test.go:243: (dbg) docker inspect no-preload-907610:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe",
	        "Created": "2025-11-15T10:33:27.637520569Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:05.169391832Z",
	            "FinishedAt": "2025-11-15T10:35:04.329038782Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/hosts",
	        "LogPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe-json.log",
	        "Name": "/no-preload-907610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-907610:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-907610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe",
	                "LowerDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-907610",
	                "Source": "/var/lib/docker/volumes/no-preload-907610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-907610",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-907610",
	                "name.minikube.sigs.k8s.io": "no-preload-907610",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfc3e0964586adb46e3f75fe61403239dfc2f77d70b983e01e41a0b1d17ae6b6",
	            "SandboxKey": "/var/run/docker/netns/bfc3e0964586",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-907610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:c2:11:8b:05:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0446e1129b53a450726fcb48f165692a586d6e4eabe7e4a70c1e31a89bd483dd",
	                    "EndpointID": "88cf7b02221aed372985d73af2036ef968dd0edb68682ec447cf6b77cf49ba76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-907610",
	                        "10054bd2292b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610: exit status 2 (461.315978ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-907610 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-907610 logs -n 25: (1.743075113s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-448285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:49.753036  711801 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:49.753219  711801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:49.753253  711801 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:49.753279  711801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:49.753575  711801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:35:49.754014  711801 out.go:368] Setting JSON to false
	I1115 10:35:49.755031  711801 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19101,"bootTime":1763183849,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:35:49.755125  711801 start.go:143] virtualization:  
	I1115 10:35:49.758095  711801 out.go:179] * [embed-certs-531596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:35:49.762002  711801 notify.go:221] Checking for updates...
	I1115 10:35:49.762566  711801 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:35:49.765724  711801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:49.768730  711801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:49.771575  711801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:35:49.774410  711801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:35:49.777254  711801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1115 10:35:44.898974  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:47.398684  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:49.399215  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:49.780710  711801 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:49.781302  711801 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:49.807520  711801 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:35:49.807628  711801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:49.870303  711801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:35:49.861306964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:35:49.870411  711801 docker.go:319] overlay module found
	I1115 10:35:49.874779  711801 out.go:179] * Using the docker driver based on existing profile
	I1115 10:35:49.877664  711801 start.go:309] selected driver: docker
	I1115 10:35:49.877683  711801 start.go:930] validating driver "docker" against &{Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:49.877793  711801 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:49.878510  711801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:49.936024  711801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:35:49.927333597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:35:49.936370  711801 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:49.936401  711801 cni.go:84] Creating CNI manager for ""
	I1115 10:35:49.936460  711801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:49.936496  711801 start.go:353] cluster config:
	{Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:49.939616  711801 out.go:179] * Starting "embed-certs-531596" primary control-plane node in "embed-certs-531596" cluster
	I1115 10:35:49.942367  711801 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:49.945220  711801 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:49.948096  711801 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:49.948277  711801 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:49.948305  711801 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:35:49.948313  711801 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:49.948380  711801 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:35:49.948394  711801 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:49.948507  711801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json ...
	I1115 10:35:49.974085  711801 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:49.974109  711801 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:49.974123  711801 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:49.974146  711801 start.go:360] acquireMachinesLock for embed-certs-531596: {Name:mk92715fcdfed9f5936819aaa5d8bdc4948b9228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:49.974213  711801 start.go:364] duration metric: took 37.504µs to acquireMachinesLock for "embed-certs-531596"
	I1115 10:35:49.974238  711801 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:35:49.974248  711801 fix.go:54] fixHost starting: 
	I1115 10:35:49.974495  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:49.991856  711801 fix.go:112] recreateIfNeeded on embed-certs-531596: state=Stopped err=<nil>
	W1115 10:35:49.991888  711801 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:35:49.995175  711801 out.go:252] * Restarting existing docker container for "embed-certs-531596" ...
	I1115 10:35:49.995252  711801 cli_runner.go:164] Run: docker start embed-certs-531596
	I1115 10:35:50.279857  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:50.305476  711801 kic.go:430] container "embed-certs-531596" state is running.
	I1115 10:35:50.305979  711801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:35:50.330837  711801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json ...
	I1115 10:35:50.331070  711801 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:50.331144  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:50.354743  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:50.355120  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:50.355129  711801 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:50.356010  711801 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:35:53.509443  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-531596
	
	I1115 10:35:53.509509  711801 ubuntu.go:182] provisioning hostname "embed-certs-531596"
	I1115 10:35:53.509634  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:53.528948  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.529269  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:53.529287  711801 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-531596 && echo "embed-certs-531596" | sudo tee /etc/hostname
	I1115 10:35:53.686496  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-531596
	
	I1115 10:35:53.686600  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:53.706020  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.706354  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:53.706380  711801 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-531596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-531596/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-531596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:53.862003  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: 
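The hostname script above only patches /etc/hosts when a 127.0.1.1 entry for the node name is missing. A minimal sketch for confirming the mapping afterwards, assuming the embed-certs-531596 profile is still up and the same out/minikube-linux-arm64 binary from this run is used:

  # check the 127.0.1.1 hostname entry written by the provisioning script
  out/minikube-linux-arm64 -p embed-certs-531596 ssh -- grep embed-certs-531596 /etc/hosts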
	I1115 10:35:53.862036  711801 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:35:53.862083  711801 ubuntu.go:190] setting up certificates
	I1115 10:35:53.862107  711801 provision.go:84] configureAuth start
	I1115 10:35:53.862178  711801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:35:53.880802  711801 provision.go:143] copyHostCerts
	I1115 10:35:53.880870  711801 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:35:53.880886  711801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:35:53.880979  711801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:35:53.881100  711801 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:35:53.881111  711801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:35:53.881140  711801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:35:53.881215  711801 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:35:53.881227  711801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:35:53.881262  711801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:35:53.881329  711801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.embed-certs-531596 san=[127.0.0.1 192.168.76.2 embed-certs-531596 localhost minikube]
	I1115 10:35:54.508503  711801 provision.go:177] copyRemoteCerts
	I1115 10:35:54.508583  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:54.508632  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:54.526517  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:54.633375  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:54.651569  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:35:54.669746  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:35:54.688446  711801 provision.go:87] duration metric: took 826.311209ms to configureAuth
	I1115 10:35:54.688528  711801 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:54.688758  711801 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:54.688896  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:54.705949  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:54.706254  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:54.706272  711801 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1115 10:35:51.898281  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:53.898722  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:55.064245  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:55.064270  711801 machine.go:97] duration metric: took 4.733182889s to provisionDockerMachine
	I1115 10:35:55.064282  711801 start.go:293] postStartSetup for "embed-certs-531596" (driver="docker")
	I1115 10:35:55.064293  711801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:55.064366  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:55.064411  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.091550  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.200062  711801 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:55.203449  711801 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:55.203482  711801 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:55.203494  711801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:35:55.203547  711801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:35:55.203627  711801 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:35:55.203733  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:55.211123  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:35:55.229848  711801 start.go:296] duration metric: took 165.551105ms for postStartSetup
	I1115 10:35:55.229928  711801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:55.229989  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.246572  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.350715  711801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:55.355585  711801 fix.go:56] duration metric: took 5.381329888s for fixHost
	I1115 10:35:55.355611  711801 start.go:83] releasing machines lock for "embed-certs-531596", held for 5.381385017s
	I1115 10:35:55.355694  711801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:35:55.372332  711801 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:55.372388  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.372394  711801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:55.372458  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.389721  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.404877  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.586955  711801 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:55.593433  711801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:55.628333  711801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:55.633154  711801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:55.633226  711801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:55.640879  711801 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:55.640901  711801 start.go:496] detecting cgroup driver to use...
	I1115 10:35:55.640931  711801 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:55.640977  711801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:55.656896  711801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:55.669739  711801 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:55.669806  711801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:55.685516  711801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:55.698674  711801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:55.812679  711801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:55.944477  711801 docker.go:234] disabling docker service ...
	I1115 10:35:55.944583  711801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:55.961018  711801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:55.975907  711801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:56.100147  711801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:56.218495  711801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:56.232246  711801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:56.246593  711801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:56.246667  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.255799  711801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:56.255879  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.265504  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.275049  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.284020  711801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:56.292050  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.300463  711801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.308798  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.318343  711801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:56.326133  711801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:56.335144  711801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:56.451080  711801 ssh_runner.go:195] Run: sudo systemctl restart crio
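The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before cri-o is restarted. A quick spot-check of the resulting drop-in, sketched under the assumption that the profile is still running:

  # inspect the values rewritten by the sed commands above
  out/minikube-linux-arm64 -p embed-certs-531596 ssh -- \
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly: pause_image = "registry.k8s.io/pause:3.10.1",
  #   cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
  #   and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls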
	I1115 10:35:56.586083  711801 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:56.586193  711801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:56.590671  711801 start.go:564] Will wait 60s for crictl version
	I1115 10:35:56.590784  711801 ssh_runner.go:195] Run: which crictl
	I1115 10:35:56.594478  711801 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:56.619885  711801 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:56.620047  711801 ssh_runner.go:195] Run: crio --version
	I1115 10:35:56.652961  711801 ssh_runner.go:195] Run: crio --version
	I1115 10:35:56.695739  711801 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:35:56.698523  711801 cli_runner.go:164] Run: docker network inspect embed-certs-531596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:56.714297  711801 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:56.718219  711801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:56.727646  711801 kubeadm.go:884] updating cluster {Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:56.727769  711801 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:56.727838  711801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:56.763501  711801 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:56.763525  711801 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:56.763580  711801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:56.796848  711801 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:56.796874  711801 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:56.796883  711801 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:56.796976  711801 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-531596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
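The kubelet unit rendered above is written to the node a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service (see the scp lines below). A sketch for viewing the effective unit on the node, assuming the profile is reachable:

  # show the kubelet unit plus all drop-ins as systemd sees them
  out/minikube-linux-arm64 -p embed-certs-531596 ssh -- sudo systemctl cat kubelet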
	I1115 10:35:56.797071  711801 ssh_runner.go:195] Run: crio config
	I1115 10:35:56.865017  711801 cni.go:84] Creating CNI manager for ""
	I1115 10:35:56.865042  711801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:56.865063  711801 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:56.865113  711801 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-531596 NodeName:embed-certs-531596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:56.865301  711801 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-531596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
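This generated manifest is copied to /var/tmp/minikube/kubeadm.yaml.new below and, at the end of this excerpt, diffed against the existing /var/tmp/minikube/kubeadm.yaml as part of deciding whether the control plane can be restarted with its existing configuration. The same comparison can be reproduced by hand; a sketch, assuming the node is up:

  # compare the freshly rendered kubeadm config with the one already on the node
  out/minikube-linux-arm64 -p embed-certs-531596 ssh -- \
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new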
	I1115 10:35:56.865411  711801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:56.873404  711801 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:56.873487  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:56.880915  711801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:35:56.894166  711801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:56.908437  711801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:35:56.921152  711801 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:56.924728  711801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:56.934279  711801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:57.073637  711801 ssh_runner.go:195] Run: sudo systemctl start kubelet
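After the unit files are copied and systemd is reloaded, the kubelet is started directly. A minimal check that it actually came up, under the same assumptions as the sketches above:

  # confirm the kubelet service is active on the node
  out/minikube-linux-arm64 -p embed-certs-531596 ssh -- sudo systemctl is-active kubelet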
	I1115 10:35:57.092585  711801 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596 for IP: 192.168.76.2
	I1115 10:35:57.092694  711801 certs.go:195] generating shared ca certs ...
	I1115 10:35:57.092767  711801 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.092975  711801 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:35:57.093076  711801 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:35:57.093134  711801 certs.go:257] generating profile certs ...
	I1115 10:35:57.093282  711801 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.key
	I1115 10:35:57.093426  711801 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key.8b8c468c
	I1115 10:35:57.093518  711801 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key
	I1115 10:35:57.093727  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:35:57.093842  711801 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:57.093885  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:35:57.093937  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:57.094019  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:57.094083  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:35:57.094186  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:35:57.094999  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:57.122829  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:57.144957  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:57.171048  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:57.206105  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:35:57.232346  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:35:57.255195  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:57.283898  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:35:57.306687  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:57.327895  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:35:57.346579  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:35:57.367062  711801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:57.380880  711801 ssh_runner.go:195] Run: openssl version
	I1115 10:35:57.387280  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:57.401443  711801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:57.405280  711801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:57.405364  711801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:57.449591  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:57.457790  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:35:57.466846  711801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:35:57.470583  711801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:35:57.470654  711801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:35:57.516332  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:57.525929  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:35:57.535407  711801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:35:57.539534  711801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:35:57.539628  711801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:35:57.581145  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:57.589917  711801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:57.593828  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:57.634282  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:57.676498  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:57.721981  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:57.767529  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:57.812087  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
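Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours (exit status 0 means it does not). The actual expiry date of any of these certificates can be printed the same way, shown here for the apiserver cert; a sketch under the same assumptions:

  # print the notAfter date instead of just the 24h check
  out/minikube-linux-arm64 -p embed-certs-531596 ssh -- \
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt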
	I1115 10:35:57.869941  711801 kubeadm.go:401] StartCluster: {Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:57.870073  711801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:57.870195  711801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:57.934016  711801 cri.go:89] found id: "fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc"
	I1115 10:35:57.934088  711801 cri.go:89] found id: "f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df"
	I1115 10:35:57.934123  711801 cri.go:89] found id: "8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17"
	I1115 10:35:57.934149  711801 cri.go:89] found id: ""
	I1115 10:35:57.934233  711801 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:57.950151  711801 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:57Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:57.950282  711801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:57.973074  711801 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:57.973143  711801 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:57.973230  711801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:57.991025  711801 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:57.991706  711801 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-531596" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:57.992039  711801 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-531596" cluster setting kubeconfig missing "embed-certs-531596" context setting]
	I1115 10:35:57.992590  711801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.994262  711801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:58.006883  711801 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:35:58.006921  711801 kubeadm.go:602] duration metric: took 33.757665ms to restartPrimaryControlPlane
	I1115 10:35:58.006932  711801 kubeadm.go:403] duration metric: took 137.003937ms to StartCluster
	I1115 10:35:58.006951  711801 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:58.007028  711801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:58.008692  711801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:58.009307  711801 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:58.010357  711801 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:58.010605  711801 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:58.010769  711801 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-531596"
	I1115 10:35:58.010788  711801 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-531596"
	W1115 10:35:58.010795  711801 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:58.010820  711801 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:35:58.011368  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.011695  711801 addons.go:70] Setting dashboard=true in profile "embed-certs-531596"
	I1115 10:35:58.011749  711801 addons.go:239] Setting addon dashboard=true in "embed-certs-531596"
	W1115 10:35:58.011775  711801 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:58.011825  711801 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:35:58.012358  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.015739  711801 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:58.016258  711801 addons.go:70] Setting default-storageclass=true in profile "embed-certs-531596"
	I1115 10:35:58.016284  711801 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-531596"
	I1115 10:35:58.016685  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.033643  711801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:58.057858  711801 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:58.061136  711801 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:58.066942  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:58.066975  711801 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:58.067058  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:58.089946  711801 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:58.093259  711801 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:58.093281  711801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:58.093357  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:58.109773  711801 addons.go:239] Setting addon default-storageclass=true in "embed-certs-531596"
	W1115 10:35:58.109798  711801 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:58.109823  711801 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:35:58.110297  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.125827  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:58.157837  711801 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:58.157859  711801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:58.157919  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:58.165440  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:58.188060  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:58.351439  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:58.351512  711801 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:58.399704  711801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:58.426979  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:58.427054  711801 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:58.437121  711801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:58.443794  711801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:58.494188  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:58.494216  711801 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:58.583926  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:58.583950  711801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:58.659915  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:58.659942  711801 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:58.685967  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:58.686042  711801 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:58.706238  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:58.706312  711801 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:58.723939  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:58.724010  711801 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:58.740855  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:58.740928  711801 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:58.756628  711801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1115 10:35:55.899411  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:57.900621  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:59.398433  708863 pod_ready.go:94] pod "coredns-66bc5c9577-ql8g6" is "Ready"
	I1115 10:35:59.398458  708863 pod_ready.go:86] duration metric: took 41.005189392s for pod "coredns-66bc5c9577-ql8g6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.401128  708863 pod_ready.go:83] waiting for pod "etcd-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.407266  708863 pod_ready.go:94] pod "etcd-no-preload-907610" is "Ready"
	I1115 10:35:59.407290  708863 pod_ready.go:86] duration metric: took 6.100909ms for pod "etcd-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.409671  708863 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.416294  708863 pod_ready.go:94] pod "kube-apiserver-no-preload-907610" is "Ready"
	I1115 10:35:59.416317  708863 pod_ready.go:86] duration metric: took 6.627827ms for pod "kube-apiserver-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.419547  708863 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.597026  708863 pod_ready.go:94] pod "kube-controller-manager-no-preload-907610" is "Ready"
	I1115 10:35:59.597108  708863 pod_ready.go:86] duration metric: took 177.539324ms for pod "kube-controller-manager-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.797018  708863 pod_ready.go:83] waiting for pod "kube-proxy-rh8h4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.197947  708863 pod_ready.go:94] pod "kube-proxy-rh8h4" is "Ready"
	I1115 10:36:00.197991  708863 pod_ready.go:86] duration metric: took 400.890081ms for pod "kube-proxy-rh8h4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.398221  708863 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.796981  708863 pod_ready.go:94] pod "kube-scheduler-no-preload-907610" is "Ready"
	I1115 10:36:00.797011  708863 pod_ready.go:86] duration metric: took 398.714167ms for pod "kube-scheduler-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.797024  708863 pod_ready.go:40] duration metric: took 42.407484886s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:00.889204  708863 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:36:00.892291  708863 out.go:179] * Done! kubectl is now configured to use "no-preload-907610" cluster and "default" namespace by default
	I1115 10:36:04.516591  711801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.116806352s)
	I1115 10:36:04.516652  711801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.079467396s)
	I1115 10:36:04.516964  711801 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.073134256s)
	I1115 10:36:04.516997  711801 node_ready.go:35] waiting up to 6m0s for node "embed-certs-531596" to be "Ready" ...
	I1115 10:36:04.543275  711801 node_ready.go:49] node "embed-certs-531596" is "Ready"
	I1115 10:36:04.543356  711801 node_ready.go:38] duration metric: took 26.345801ms for node "embed-certs-531596" to be "Ready" ...
	I1115 10:36:04.543388  711801 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:04.543553  711801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:04.599143  711801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.842417796s)
	I1115 10:36:04.599404  711801 api_server.go:72] duration metric: took 6.590057182s to wait for apiserver process to appear ...
	I1115 10:36:04.599431  711801 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:04.599449  711801 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:36:04.602438  711801 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-531596 addons enable metrics-server
	
	I1115 10:36:04.605324  711801 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 10:36:04.608722  711801 addons.go:515] duration metric: took 6.598103596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 10:36:04.609404  711801 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:04.609430  711801 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:05.099826  711801 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:36:05.147211  711801 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:36:05.150787  711801 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:05.150820  711801 api_server.go:131] duration metric: took 551.381846ms to wait for apiserver health ...
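
The healthz phase above is a simple retry loop: the first probe returns 500 while the rbac/bootstrap-roles post-start hook is still pending, and the next probe returns 200. A minimal Go sketch of that kind of polling, assuming the endpoint URL from the log and skipping TLS verification for brevity (minikube's own client trusts the cluster CA instead):

// Sketch: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log above
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
	}
	fmt.Println("apiserver did not become healthy in time")
}
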
	I1115 10:36:05.150830  711801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:05.175106  711801 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:05.175143  711801 system_pods.go:61] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:05.175153  711801 system_pods.go:61] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:05.175161  711801 system_pods.go:61] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:36:05.175169  711801 system_pods.go:61] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:05.175177  711801 system_pods.go:61] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:05.175183  711801 system_pods.go:61] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:36:05.175195  711801 system_pods.go:61] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:05.175201  711801 system_pods.go:61] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:05.175207  711801 system_pods.go:74] duration metric: took 24.371054ms to wait for pod list to return data ...
	I1115 10:36:05.175215  711801 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:05.240808  711801 default_sa.go:45] found service account: "default"
	I1115 10:36:05.240837  711801 default_sa.go:55] duration metric: took 65.61505ms for default service account to be created ...
	I1115 10:36:05.240863  711801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:05.276191  711801 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:05.276294  711801 system_pods.go:89] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:05.276321  711801 system_pods.go:89] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:05.276362  711801 system_pods.go:89] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:36:05.276395  711801 system_pods.go:89] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:05.276438  711801 system_pods.go:89] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:05.276467  711801 system_pods.go:89] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:36:05.276496  711801 system_pods.go:89] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:05.276537  711801 system_pods.go:89] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:05.276565  711801 system_pods.go:126] duration metric: took 35.669447ms to wait for k8s-apps to be running ...
	I1115 10:36:05.276590  711801 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:05.276696  711801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:05.294553  711801 system_svc.go:56] duration metric: took 17.944985ms WaitForService to wait for kubelet
	I1115 10:36:05.294637  711801 kubeadm.go:587] duration metric: took 7.285276759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:05.294672  711801 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:05.308114  711801 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:36:05.308196  711801 node_conditions.go:123] node cpu capacity is 2
	I1115 10:36:05.308234  711801 node_conditions.go:105] duration metric: took 13.529297ms to run NodePressure ...
	I1115 10:36:05.308279  711801 start.go:242] waiting for startup goroutines ...
	I1115 10:36:05.308305  711801 start.go:247] waiting for cluster config update ...
	I1115 10:36:05.308330  711801 start.go:256] writing updated cluster config ...
	I1115 10:36:05.308678  711801 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:05.314361  711801 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:05.380986  711801 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sl29r" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:36:07.386205  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:09.390413  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:11.391818  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:13.396961  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
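
The pod_ready lines above trace a per-pod wait on the Ready condition for the labelled kube-system pods. A minimal client-go sketch of that check, assuming an illustrative kubeconfig path and using one pod name taken from the log:

// Sketch: wait for a kube-system pod to report the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	podName := "coredns-66bc5c9577-sl29r" // pod name from the log above
	for i := 0; i < 120; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println(podName, "is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // the log above polls on roughly a 2s interval
	}
	fmt.Println(podName, "did not become Ready in time")
}
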
	
	
	==> CRI-O <==
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.029672109Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.042405588Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.042564352Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.042663861Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.052459429Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.052492355Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.052508888Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.063381791Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.063571021Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.063675001Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.086237925Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.08627319Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.091007494Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d017f9a-1841-44c1-ba68-b5123b2a7ab2 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.092511091Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f3bb0616-54e0-4d26-a2ee-e751e1c0af7b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.093861995Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper" id=1acfab62-b509-45fd-9c5a-e7e5faeee112 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.094042429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.102802447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.103353898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.156344175Z" level=info msg="Created container 32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper" id=1acfab62-b509-45fd-9c5a-e7e5faeee112 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.161868313Z" level=info msg="Starting container: 32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425" id=50fcdf13-71e0-4c71-a569-f7614347ef76 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.165746293Z" level=info msg="Started container" PID=1719 containerID=32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper id=50fcdf13-71e0-4c71-a569-f7614347ef76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1e331914c7ea5bbbc91745e56453da46903f9fc077203f37c4237e8573aa1b5
	Nov 15 10:36:09 no-preload-907610 conmon[1717]: conmon 32c5bec10debbffa97c8 <ninfo>: container 1719 exited with status 1
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.349380125Z" level=info msg="Removing container: e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52" id=61614f41-1365-45d8-bb1c-838a94f47893 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.361784398Z" level=info msg="Error loading conmon cgroup of container e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52: cgroup deleted" id=61614f41-1365-45d8-bb1c-838a94f47893 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.367269398Z" level=info msg="Removed container e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper" id=61614f41-1365-45d8-bb1c-838a94f47893 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	32c5bec10debb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   b1e331914c7ea       dashboard-metrics-scraper-6ffb444bf9-wp5nk   kubernetes-dashboard
	c406583649734       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           28 seconds ago       Running             storage-provisioner         2                   7fb66e5711814       storage-provisioner                          kube-system
	629e55498715b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   0c1fc54e35ccf       kubernetes-dashboard-855c9754f9-nf42b        kubernetes-dashboard
	3895648769f9b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   2f0b4ab85bc9c       coredns-66bc5c9577-ql8g6                     kube-system
	ac6df99cce17d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   3fc90c399713e       busybox                                      default
	e83f82003a368       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   2da48b8e87a3e       kube-proxy-rh8h4                             kube-system
	c244dd133e1dd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   282b27aad797e       kindnet-kgnjv                                kube-system
	718b69e5cb82f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           59 seconds ago       Exited              storage-provisioner         1                   7fb66e5711814       storage-provisioner                          kube-system
	e39b03ee83b22       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   17391725f794a       etcd-no-preload-907610                       kube-system
	fbba8e0ca18f1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   d29ab304e603f       kube-scheduler-no-preload-907610             kube-system
	2595af5ed79b0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ff532881dc7ad       kube-apiserver-no-preload-907610             kube-system
	aa8b90296193a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   0dcc70746fe82       kube-controller-manager-no-preload-907610    kube-system
	
	
	==> coredns [3895648769f9b491dcabadc00916d0ddad17303c433290e0f2ea5f58450bca76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53281 - 6538 "HINFO IN 140613460854729103.6544380428870274949. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020768817s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-907610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-907610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=no-preload-907610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-907610
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-907610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d6341372-a597-4e99-ab89-f00924067763
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-ql8g6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-907610                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-kgnjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-907610              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-907610     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-rh8h4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-907610              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wp5nk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nf42b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m                     kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m                     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m                     kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                     kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                     kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                   node-controller  Node no-preload-907610 event: Registered Node no-preload-907610 in Controller
	  Normal   NodeReady                98s                    kubelet          Node no-preload-907610 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-907610 event: Registered Node no-preload-907610 in Controller
	
	
	==> dmesg <==
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e39b03ee83b22e578b9f5605c3bd8e0ef77ee33deddbb7aa1624c005fece9124] <==
	{"level":"warn","ts":"2025-11-15T10:35:15.090675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.117169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.136196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.161154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.177182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.214942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.238342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.269517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.304531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.334958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.375661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.414601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.435897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.448622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.465745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.484914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.506327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.522255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.576292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.598348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.632053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.655875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.671609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.690650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.748549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:16 up  5:18,  0 user,  load average: 3.92, 3.50, 2.94
	Linux no-preload-907610 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c244dd133e1dd4c2d238dd64ce7a3cf6ebc1eab1f51072dc25ab2e89edea3d0a] <==
	I1115 10:35:17.809899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:35:17.814762       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:35:17.814910       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:35:17.814923       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:35:17.814934       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:35:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:35:18.014604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:35:18.022515       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:35:18.022561       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:35:18.022732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:35:48.011706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:35:48.014138       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 10:35:48.015507       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:35:48.015511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 10:35:49.323493       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:49.323608       1 metrics.go:72] Registering metrics
	I1115 10:35:49.323720       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:58.012754       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:35:58.012891       1 main.go:301] handling current node
	I1115 10:36:08.016989       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:36:08.017024       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2595af5ed79b0d008d0a4a9885bb6bb2d922c8e0fc4984e57ea1078e606230d7] <==
	I1115 10:35:16.640497       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:35:16.640502       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:35:16.640508       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:35:16.675650       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:16.687874       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:35:16.688047       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:35:16.688063       1 policy_source.go:240] refreshing policies
	I1115 10:35:16.688303       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:16.691841       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:35:16.697131       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1115 10:35:16.702368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:35:16.702980       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:35:16.703005       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:35:16.703705       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:17.065630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:17.420593       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:17.694934       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:35:17.809051       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:17.848533       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:17.868798       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:17.973342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.22.178"}
	I1115 10:35:17.999942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.131.161"}
	I1115 10:35:20.066169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:35:20.261571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:20.610954       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [aa8b90296193a6708fd35513c6745262f53b36234f1f69ebb1d6aee50a60dfcd] <==
	I1115 10:35:20.065239       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:35:20.069425       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:35:20.072427       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:35:20.076911       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:35:20.082586       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:35:20.084783       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:20.086964       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:35:20.087244       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:20.088162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:20.096128       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:35:20.098879       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:35:20.102788       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:35:20.102788       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:35:20.103041       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:35:20.103060       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:35:20.103074       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:35:20.104524       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:35:20.106517       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:35:20.106662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:35:20.106760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-907610"
	I1115 10:35:20.106826       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:35:20.109080       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:20.112491       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:20.115058       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:20.118465       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [e83f82003a36807fb00707347761aad18e6318b7683492f9be8eb5f018407286] <==
	I1115 10:35:18.195937       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:35:18.339640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:18.441211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:18.441328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:35:18.441411       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:18.460932       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:35:18.460986       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:18.466473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:18.466826       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:18.466851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:18.467859       1 config.go:309] "Starting node config controller"
	I1115 10:35:18.467883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:18.470578       1 config.go:200] "Starting service config controller"
	I1115 10:35:18.470642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:18.472044       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:18.472067       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:18.472085       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:18.472089       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:18.568713       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:18.570918       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:35:18.573156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:35:18.573171       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fbba8e0ca18f1bb361aade61f62504b671e8e02da9e21dc771c669d6472159f2] <==
	I1115 10:35:16.522171       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:16.528377       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:35:16.529079       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:35:16.529135       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:16.532137       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1115 10:35:16.570948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 10:35:16.571902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:35:16.572031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:35:16.572127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:35:16.572243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:35:16.572297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:35:16.572341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:35:16.572398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:35:16.572442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:35:16.572485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:35:16.572528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:35:16.572567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:35:16.572616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:35:16.572674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:35:16.572715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:35:16.572768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:35:16.572806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:35:16.572893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:35:16.572941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1115 10:35:18.133110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:20 no-preload-907610 kubelet[770]: I1115 10:35:20.874821     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwfz\" (UniqueName: \"kubernetes.io/projected/cc0070c5-5691-4c32-a0c4-91cd5ed4d27b-kube-api-access-tlwfz\") pod \"dashboard-metrics-scraper-6ffb444bf9-wp5nk\" (UID: \"cc0070c5-5691-4c32-a0c4-91cd5ed4d27b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk"
	Nov 15 10:35:20 no-preload-907610 kubelet[770]: I1115 10:35:20.874854     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n847\" (UniqueName: \"kubernetes.io/projected/a18b3230-1ea5-4199-abb6-f03a528c964f-kube-api-access-2n847\") pod \"kubernetes-dashboard-855c9754f9-nf42b\" (UID: \"a18b3230-1ea5-4199-abb6-f03a528c964f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nf42b"
	Nov 15 10:35:26 no-preload-907610 kubelet[770]: I1115 10:35:26.190648     770 scope.go:117] "RemoveContainer" containerID="a83f0ce14e84d59e68fd56b344f250878dfe739ec2d3dcaff67324c028050df6"
	Nov 15 10:35:27 no-preload-907610 kubelet[770]: I1115 10:35:27.214197     770 scope.go:117] "RemoveContainer" containerID="a83f0ce14e84d59e68fd56b344f250878dfe739ec2d3dcaff67324c028050df6"
	Nov 15 10:35:27 no-preload-907610 kubelet[770]: I1115 10:35:27.215222     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:27 no-preload-907610 kubelet[770]: E1115 10:35:27.215397     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:28 no-preload-907610 kubelet[770]: I1115 10:35:28.240738     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:28 no-preload-907610 kubelet[770]: E1115 10:35:28.240924     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:34 no-preload-907610 kubelet[770]: I1115 10:35:34.776528     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:34 no-preload-907610 kubelet[770]: E1115 10:35:34.777173     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.089673     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.282364     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.282667     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: E1115 10:35:45.282828     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.311403     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nf42b" podStartSLOduration=15.423006737 podStartE2EDuration="25.311382861s" podCreationTimestamp="2025-11-15 10:35:20 +0000 UTC" firstStartedPulling="2025-11-15 10:35:21.064720476 +0000 UTC m=+9.234985491" lastFinishedPulling="2025-11-15 10:35:30.9530966 +0000 UTC m=+19.123361615" observedRunningTime="2025-11-15 10:35:31.268821103 +0000 UTC m=+19.439086126" watchObservedRunningTime="2025-11-15 10:35:45.311382861 +0000 UTC m=+33.481647892"
	Nov 15 10:35:48 no-preload-907610 kubelet[770]: I1115 10:35:48.292727     770 scope.go:117] "RemoveContainer" containerID="718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9"
	Nov 15 10:35:54 no-preload-907610 kubelet[770]: I1115 10:35:54.776829     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:35:54 no-preload-907610 kubelet[770]: E1115 10:35:54.777422     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:36:09 no-preload-907610 kubelet[770]: I1115 10:36:09.089834     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:36:09 no-preload-907610 kubelet[770]: I1115 10:36:09.347951     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:36:10 no-preload-907610 kubelet[770]: I1115 10:36:10.351935     770 scope.go:117] "RemoveContainer" containerID="32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425"
	Nov 15 10:36:10 no-preload-907610 kubelet[770]: E1115 10:36:10.352090     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:36:13 no-preload-907610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:13 no-preload-907610 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:13 no-preload-907610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [629e55498715b583593925f583bf65140ba0524b62727038c86feededab7c232] <==
	2025/11/15 10:35:31 Using namespace: kubernetes-dashboard
	2025/11/15 10:35:31 Using in-cluster config to connect to apiserver
	2025/11/15 10:35:31 Using secret token for csrf signing
	2025/11/15 10:35:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:35:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:35:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:35:31 Generating JWE encryption key
	2025/11/15 10:35:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:35:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:35:31 Initializing JWE encryption key from synchronized object
	2025/11/15 10:35:31 Creating in-cluster Sidecar client
	2025/11/15 10:35:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:31 Serving insecurely on HTTP port: 9090
	2025/11/15 10:36:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:31 Starting overwatch
	
	
	==> storage-provisioner [718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9] <==
	I1115 10:35:17.620959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:35:47.623670       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c4065836497343e6d9303217966ac46ac647fdbce23f7e49368d3880af4e8fc6] <==
	I1115 10:35:48.350657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:35:48.350706       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:35:48.352807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:51.807740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:56.067963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:59.666223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:02.720067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.742681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.751244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:05.751433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:05.757565       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-907610_21b537b7-0db2-4b30-afba-8a82f96a2376!
	I1115 10:36:05.754230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd672018-af9b-4d26-a795-58bf6d65cf94", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-907610_21b537b7-0db2-4b30-afba-8a82f96a2376 became leader
	W1115 10:36:05.760770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.766417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:05.858321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-907610_21b537b7-0db2-4b30-afba-8a82f96a2376!
	W1115 10:36:07.769569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:07.774063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:09.777449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:09.783166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:11.786346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:11.796653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:13.801182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:13.807384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:15.810743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:15.816786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-907610 -n no-preload-907610
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-907610 -n no-preload-907610: exit status 2 (502.11763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-907610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-907610
helpers_test.go:243: (dbg) docker inspect no-preload-907610:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe",
	        "Created": "2025-11-15T10:33:27.637520569Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:05.169391832Z",
	            "FinishedAt": "2025-11-15T10:35:04.329038782Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/hosts",
	        "LogPath": "/var/lib/docker/containers/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe/10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe-json.log",
	        "Name": "/no-preload-907610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-907610:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-907610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10054bd2292b3458987d6b454ea01efbe2bf3a918561d12de7c838384c2ca8fe",
	                "LowerDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f24c5c42d15d6f88d0d6105f1e77425cd836537c03df126037a77923d3a043d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-907610",
	                "Source": "/var/lib/docker/volumes/no-preload-907610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-907610",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-907610",
	                "name.minikube.sigs.k8s.io": "no-preload-907610",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfc3e0964586adb46e3f75fe61403239dfc2f77d70b983e01e41a0b1d17ae6b6",
	            "SandboxKey": "/var/run/docker/netns/bfc3e0964586",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-907610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:c2:11:8b:05:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0446e1129b53a450726fcb48f165692a586d6e4eabe7e4a70c1e31a89bd483dd",
	                    "EndpointID": "88cf7b02221aed372985d73af2036ef968dd0edb68682ec447cf6b77cf49ba76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-907610",
	                        "10054bd2292b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610: exit status 2 (445.033486ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-907610 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-907610 logs -n 25: (1.633799099s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-115480 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-115480    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ delete  │ -p cert-options-115480                                                                                                                                                                                                                        │ cert-options-115480    │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:30 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:30 UTC │ 15 Nov 25 10:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-448285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │                     │
	│ stop    │ -p old-k8s-version-448285 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:31 UTC │ 15 Nov 25 10:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:32 UTC │
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026 │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596     │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610      │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:35:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:35:49.753036  711801 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:35:49.753219  711801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:49.753253  711801 out.go:374] Setting ErrFile to fd 2...
	I1115 10:35:49.753279  711801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:35:49.753575  711801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:35:49.754014  711801 out.go:368] Setting JSON to false
	I1115 10:35:49.755031  711801 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19101,"bootTime":1763183849,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:35:49.755125  711801 start.go:143] virtualization:  
	I1115 10:35:49.758095  711801 out.go:179] * [embed-certs-531596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:35:49.762002  711801 notify.go:221] Checking for updates...
	I1115 10:35:49.762566  711801 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:35:49.765724  711801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:35:49.768730  711801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:49.771575  711801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:35:49.774410  711801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:35:49.777254  711801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1115 10:35:44.898974  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:47.398684  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:49.399215  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:49.780710  711801 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:49.781302  711801 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:35:49.807520  711801 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:35:49.807628  711801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:49.870303  711801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:35:49.861306964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:35:49.870411  711801 docker.go:319] overlay module found
	I1115 10:35:49.874779  711801 out.go:179] * Using the docker driver based on existing profile
	I1115 10:35:49.877664  711801 start.go:309] selected driver: docker
	I1115 10:35:49.877683  711801 start.go:930] validating driver "docker" against &{Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:49.877793  711801 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:35:49.878510  711801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:35:49.936024  711801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:35:49.927333597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:35:49.936370  711801 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:35:49.936401  711801 cni.go:84] Creating CNI manager for ""
	I1115 10:35:49.936460  711801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:49.936496  711801 start.go:353] cluster config:
	{Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:49.939616  711801 out.go:179] * Starting "embed-certs-531596" primary control-plane node in "embed-certs-531596" cluster
	I1115 10:35:49.942367  711801 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:35:49.945220  711801 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:35:49.948096  711801 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:35:49.948277  711801 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:49.948305  711801 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:35:49.948313  711801 cache.go:65] Caching tarball of preloaded images
	I1115 10:35:49.948380  711801 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:35:49.948394  711801 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:35:49.948507  711801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json ...
	I1115 10:35:49.974085  711801 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:35:49.974109  711801 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:35:49.974123  711801 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:35:49.974146  711801 start.go:360] acquireMachinesLock for embed-certs-531596: {Name:mk92715fcdfed9f5936819aaa5d8bdc4948b9228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:35:49.974213  711801 start.go:364] duration metric: took 37.504µs to acquireMachinesLock for "embed-certs-531596"
	I1115 10:35:49.974238  711801 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:35:49.974248  711801 fix.go:54] fixHost starting: 
	I1115 10:35:49.974495  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:49.991856  711801 fix.go:112] recreateIfNeeded on embed-certs-531596: state=Stopped err=<nil>
	W1115 10:35:49.991888  711801 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:35:49.995175  711801 out.go:252] * Restarting existing docker container for "embed-certs-531596" ...
	I1115 10:35:49.995252  711801 cli_runner.go:164] Run: docker start embed-certs-531596
	I1115 10:35:50.279857  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:50.305476  711801 kic.go:430] container "embed-certs-531596" state is running.
	I1115 10:35:50.305979  711801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:35:50.330837  711801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/config.json ...
	I1115 10:35:50.331070  711801 machine.go:94] provisionDockerMachine start ...
	I1115 10:35:50.331144  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:50.354743  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:50.355120  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:50.355129  711801 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:35:50.356010  711801 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:35:53.509443  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-531596
	
	I1115 10:35:53.509509  711801 ubuntu.go:182] provisioning hostname "embed-certs-531596"
	I1115 10:35:53.509634  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:53.528948  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.529269  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:53.529287  711801 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-531596 && echo "embed-certs-531596" | sudo tee /etc/hostname
	I1115 10:35:53.686496  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-531596
	
	I1115 10:35:53.686600  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:53.706020  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:53.706354  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:53.706380  711801 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-531596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-531596/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-531596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:35:53.862003  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:35:53.862036  711801 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:35:53.862083  711801 ubuntu.go:190] setting up certificates
	I1115 10:35:53.862107  711801 provision.go:84] configureAuth start
	I1115 10:35:53.862178  711801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:35:53.880802  711801 provision.go:143] copyHostCerts
	I1115 10:35:53.880870  711801 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:35:53.880886  711801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:35:53.880979  711801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:35:53.881100  711801 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:35:53.881111  711801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:35:53.881140  711801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:35:53.881215  711801 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:35:53.881227  711801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:35:53.881262  711801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:35:53.881329  711801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.embed-certs-531596 san=[127.0.0.1 192.168.76.2 embed-certs-531596 localhost minikube]
	I1115 10:35:54.508503  711801 provision.go:177] copyRemoteCerts
	I1115 10:35:54.508583  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:35:54.508632  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:54.526517  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:54.633375  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:35:54.651569  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:35:54.669746  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:35:54.688446  711801 provision.go:87] duration metric: took 826.311209ms to configureAuth
	I1115 10:35:54.688528  711801 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:35:54.688758  711801 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:54.688896  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:54.705949  711801 main.go:143] libmachine: Using SSH client type: native
	I1115 10:35:54.706254  711801 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1115 10:35:54.706272  711801 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1115 10:35:51.898281  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:53.898722  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:55.064245  711801 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:35:55.064270  711801 machine.go:97] duration metric: took 4.733182889s to provisionDockerMachine
	I1115 10:35:55.064282  711801 start.go:293] postStartSetup for "embed-certs-531596" (driver="docker")
	I1115 10:35:55.064293  711801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:35:55.064366  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:35:55.064411  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.091550  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.200062  711801 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:35:55.203449  711801 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:35:55.203482  711801 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:35:55.203494  711801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:35:55.203547  711801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:35:55.203627  711801 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:35:55.203733  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:35:55.211123  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:35:55.229848  711801 start.go:296] duration metric: took 165.551105ms for postStartSetup
	I1115 10:35:55.229928  711801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:35:55.229989  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.246572  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.350715  711801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:35:55.355585  711801 fix.go:56] duration metric: took 5.381329888s for fixHost
	I1115 10:35:55.355611  711801 start.go:83] releasing machines lock for "embed-certs-531596", held for 5.381385017s
	I1115 10:35:55.355694  711801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-531596
	I1115 10:35:55.372332  711801 ssh_runner.go:195] Run: cat /version.json
	I1115 10:35:55.372388  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.372394  711801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:35:55.372458  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:55.389721  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.404877  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:55.586955  711801 ssh_runner.go:195] Run: systemctl --version
	I1115 10:35:55.593433  711801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:35:55.628333  711801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:35:55.633154  711801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:35:55.633226  711801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:35:55.640879  711801 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:35:55.640901  711801 start.go:496] detecting cgroup driver to use...
	I1115 10:35:55.640931  711801 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:35:55.640977  711801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:35:55.656896  711801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:35:55.669739  711801 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:35:55.669806  711801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:35:55.685516  711801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:35:55.698674  711801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:35:55.812679  711801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:35:55.944477  711801 docker.go:234] disabling docker service ...
	I1115 10:35:55.944583  711801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:35:55.961018  711801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:35:55.975907  711801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:35:56.100147  711801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:35:56.218495  711801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:35:56.232246  711801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:35:56.246593  711801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:35:56.246667  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.255799  711801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:35:56.255879  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.265504  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.275049  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.284020  711801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:35:56.292050  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.300463  711801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.308798  711801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:35:56.318343  711801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:35:56.326133  711801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:35:56.335144  711801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:56.451080  711801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:35:56.586083  711801 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:35:56.586193  711801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:35:56.590671  711801 start.go:564] Will wait 60s for crictl version
	I1115 10:35:56.590784  711801 ssh_runner.go:195] Run: which crictl
	I1115 10:35:56.594478  711801 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:35:56.619885  711801 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:35:56.620047  711801 ssh_runner.go:195] Run: crio --version
	I1115 10:35:56.652961  711801 ssh_runner.go:195] Run: crio --version
	I1115 10:35:56.695739  711801 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
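	Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, default_sysctls) before crio is restarted. A minimal sketch of how the resulting drop-in and runtime could be inspected from the host, assuming the profile name from this run; these commands are illustrative and are not part of the test output:
	    # hypothetical follow-up checks, not executed by the test
	    minikube -p embed-certs-531596 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	    minikube -p embed-certs-531596 ssh -- sudo crictl version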
	I1115 10:35:56.698523  711801 cli_runner.go:164] Run: docker network inspect embed-certs-531596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:35:56.714297  711801 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:35:56.718219  711801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:56.727646  711801 kubeadm.go:884] updating cluster {Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:35:56.727769  711801 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:35:56.727838  711801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:56.763501  711801 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:56.763525  711801 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:35:56.763580  711801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:35:56.796848  711801 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:35:56.796874  711801 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:35:56.796883  711801 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:35:56.796976  711801 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-531596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:35:56.797071  711801 ssh_runner.go:195] Run: crio config
	I1115 10:35:56.865017  711801 cni.go:84] Creating CNI manager for ""
	I1115 10:35:56.865042  711801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:35:56.865063  711801 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:35:56.865113  711801 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-531596 NodeName:embed-certs-531596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:35:56.865301  711801 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-531596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
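	Note: the kubeadm/kubelet/kube-proxy configuration above is rendered by minikube before the control plane is restarted; eviction and image GC are effectively disabled (evictionHard thresholds of 0%, imageGCHighThresholdPercent: 100). A minimal sketch of how the kubelet's effective settings could be compared against it, assuming kubectl is pointed at this profile's context; illustrative only:
	    # hypothetical verification; node name taken from this profile
	    kubectl get --raw "/api/v1/nodes/embed-certs-531596/proxy/configz"
	    minikube -p embed-certs-531596 ssh -- sudo cat /var/lib/kubelet/config.yaml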
	I1115 10:35:56.865411  711801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:35:56.873404  711801 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:35:56.873487  711801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:35:56.880915  711801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:35:56.894166  711801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:35:56.908437  711801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1115 10:35:56.921152  711801 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:35:56.924728  711801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:35:56.934279  711801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:57.073637  711801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:57.092585  711801 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596 for IP: 192.168.76.2
	I1115 10:35:57.092694  711801 certs.go:195] generating shared ca certs ...
	I1115 10:35:57.092767  711801 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.092975  711801 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:35:57.093076  711801 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:35:57.093134  711801 certs.go:257] generating profile certs ...
	I1115 10:35:57.093282  711801 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/client.key
	I1115 10:35:57.093426  711801 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key.8b8c468c
	I1115 10:35:57.093518  711801 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key
	I1115 10:35:57.093727  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:35:57.093842  711801 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:35:57.093885  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:35:57.093937  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:35:57.094019  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:35:57.094083  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:35:57.094186  711801 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:35:57.094999  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:35:57.122829  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:35:57.144957  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:35:57.171048  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:35:57.206105  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:35:57.232346  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:35:57.255195  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:35:57.283898  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/embed-certs-531596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:35:57.306687  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:35:57.327895  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:35:57.346579  711801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:35:57.367062  711801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:35:57.380880  711801 ssh_runner.go:195] Run: openssl version
	I1115 10:35:57.387280  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:35:57.401443  711801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:57.405280  711801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:57.405364  711801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:35:57.449591  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:35:57.457790  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:35:57.466846  711801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:35:57.470583  711801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:35:57.470654  711801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:35:57.516332  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:35:57.525929  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:35:57.535407  711801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:35:57.539534  711801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:35:57.539628  711801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:35:57.581145  711801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:35:57.589917  711801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:35:57.593828  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:35:57.634282  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:35:57.676498  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:35:57.721981  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:35:57.767529  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:35:57.812087  711801 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:35:57.869941  711801 kubeadm.go:401] StartCluster: {Name:embed-certs-531596 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-531596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:35:57.870073  711801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:35:57.870195  711801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:35:57.934016  711801 cri.go:89] found id: "fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc"
	I1115 10:35:57.934088  711801 cri.go:89] found id: "f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df"
	I1115 10:35:57.934123  711801 cri.go:89] found id: "8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17"
	I1115 10:35:57.934149  711801 cri.go:89] found id: ""
	I1115 10:35:57.934233  711801 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:35:57.950151  711801 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:35:57Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:35:57.950282  711801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:35:57.973074  711801 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:35:57.973143  711801 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:35:57.973230  711801 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:35:57.991025  711801 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:35:57.991706  711801 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-531596" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:57.992039  711801 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-531596" cluster setting kubeconfig missing "embed-certs-531596" context setting]
	I1115 10:35:57.992590  711801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:57.994262  711801 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:35:58.006883  711801 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:35:58.006921  711801 kubeadm.go:602] duration metric: took 33.757665ms to restartPrimaryControlPlane
	I1115 10:35:58.006932  711801 kubeadm.go:403] duration metric: took 137.003937ms to StartCluster
	I1115 10:35:58.006951  711801 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:58.007028  711801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:35:58.008692  711801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:35:58.009307  711801 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:35:58.010357  711801 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:35:58.010605  711801 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:35:58.010769  711801 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-531596"
	I1115 10:35:58.010788  711801 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-531596"
	W1115 10:35:58.010795  711801 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:35:58.010820  711801 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:35:58.011368  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.011695  711801 addons.go:70] Setting dashboard=true in profile "embed-certs-531596"
	I1115 10:35:58.011749  711801 addons.go:239] Setting addon dashboard=true in "embed-certs-531596"
	W1115 10:35:58.011775  711801 addons.go:248] addon dashboard should already be in state true
	I1115 10:35:58.011825  711801 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:35:58.012358  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.015739  711801 out.go:179] * Verifying Kubernetes components...
	I1115 10:35:58.016258  711801 addons.go:70] Setting default-storageclass=true in profile "embed-certs-531596"
	I1115 10:35:58.016284  711801 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-531596"
	I1115 10:35:58.016685  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.033643  711801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:35:58.057858  711801 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:35:58.061136  711801 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:35:58.066942  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:35:58.066975  711801 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:35:58.067058  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:58.089946  711801 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:35:58.093259  711801 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:58.093281  711801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:35:58.093357  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:58.109773  711801 addons.go:239] Setting addon default-storageclass=true in "embed-certs-531596"
	W1115 10:35:58.109798  711801 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:35:58.109823  711801 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:35:58.110297  711801 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:35:58.125827  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:58.157837  711801 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:58.157859  711801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:35:58.157919  711801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:35:58.165440  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:58.188060  711801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:35:58.351439  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:35:58.351512  711801 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:35:58.399704  711801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:35:58.426979  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:35:58.427054  711801 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:35:58.437121  711801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:35:58.443794  711801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:35:58.494188  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:35:58.494216  711801 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:35:58.583926  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:35:58.583950  711801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:35:58.659915  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:35:58.659942  711801 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:35:58.685967  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:35:58.686042  711801 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:35:58.706238  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:35:58.706312  711801 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:35:58.723939  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:35:58.724010  711801 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:35:58.740855  711801 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:35:58.740928  711801 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:35:58.756628  711801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1115 10:35:55.899411  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	W1115 10:35:57.900621  708863 pod_ready.go:104] pod "coredns-66bc5c9577-ql8g6" is not "Ready", error: <nil>
	I1115 10:35:59.398433  708863 pod_ready.go:94] pod "coredns-66bc5c9577-ql8g6" is "Ready"
	I1115 10:35:59.398458  708863 pod_ready.go:86] duration metric: took 41.005189392s for pod "coredns-66bc5c9577-ql8g6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.401128  708863 pod_ready.go:83] waiting for pod "etcd-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.407266  708863 pod_ready.go:94] pod "etcd-no-preload-907610" is "Ready"
	I1115 10:35:59.407290  708863 pod_ready.go:86] duration metric: took 6.100909ms for pod "etcd-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.409671  708863 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.416294  708863 pod_ready.go:94] pod "kube-apiserver-no-preload-907610" is "Ready"
	I1115 10:35:59.416317  708863 pod_ready.go:86] duration metric: took 6.627827ms for pod "kube-apiserver-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.419547  708863 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.597026  708863 pod_ready.go:94] pod "kube-controller-manager-no-preload-907610" is "Ready"
	I1115 10:35:59.597108  708863 pod_ready.go:86] duration metric: took 177.539324ms for pod "kube-controller-manager-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:35:59.797018  708863 pod_ready.go:83] waiting for pod "kube-proxy-rh8h4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.197947  708863 pod_ready.go:94] pod "kube-proxy-rh8h4" is "Ready"
	I1115 10:36:00.197991  708863 pod_ready.go:86] duration metric: took 400.890081ms for pod "kube-proxy-rh8h4" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.398221  708863 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.796981  708863 pod_ready.go:94] pod "kube-scheduler-no-preload-907610" is "Ready"
	I1115 10:36:00.797011  708863 pod_ready.go:86] duration metric: took 398.714167ms for pod "kube-scheduler-no-preload-907610" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:00.797024  708863 pod_ready.go:40] duration metric: took 42.407484886s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:00.889204  708863 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:36:00.892291  708863 out.go:179] * Done! kubectl is now configured to use "no-preload-907610" cluster and "default" namespace by default
	I1115 10:36:04.516591  711801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.116806352s)
	I1115 10:36:04.516652  711801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.079467396s)
	I1115 10:36:04.516964  711801 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.073134256s)
	I1115 10:36:04.516997  711801 node_ready.go:35] waiting up to 6m0s for node "embed-certs-531596" to be "Ready" ...
	I1115 10:36:04.543275  711801 node_ready.go:49] node "embed-certs-531596" is "Ready"
	I1115 10:36:04.543356  711801 node_ready.go:38] duration metric: took 26.345801ms for node "embed-certs-531596" to be "Ready" ...
	I1115 10:36:04.543388  711801 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:36:04.543553  711801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:36:04.599143  711801 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.842417796s)
	I1115 10:36:04.599404  711801 api_server.go:72] duration metric: took 6.590057182s to wait for apiserver process to appear ...
	I1115 10:36:04.599431  711801 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:36:04.599449  711801 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:36:04.602438  711801 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-531596 addons enable metrics-server
	
	I1115 10:36:04.605324  711801 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1115 10:36:04.608722  711801 addons.go:515] duration metric: took 6.598103596s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1115 10:36:04.609404  711801 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:36:04.609430  711801 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:36:05.099826  711801 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:36:05.147211  711801 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:36:05.150787  711801 api_server.go:141] control plane version: v1.34.1
	I1115 10:36:05.150820  711801 api_server.go:131] duration metric: took 551.381846ms to wait for apiserver health ...
	I1115 10:36:05.150830  711801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:36:05.175106  711801 system_pods.go:59] 8 kube-system pods found
	I1115 10:36:05.175143  711801 system_pods.go:61] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:05.175153  711801 system_pods.go:61] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:05.175161  711801 system_pods.go:61] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:36:05.175169  711801 system_pods.go:61] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:05.175177  711801 system_pods.go:61] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:05.175183  711801 system_pods.go:61] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:36:05.175195  711801 system_pods.go:61] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:05.175201  711801 system_pods.go:61] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:05.175207  711801 system_pods.go:74] duration metric: took 24.371054ms to wait for pod list to return data ...
	I1115 10:36:05.175215  711801 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:36:05.240808  711801 default_sa.go:45] found service account: "default"
	I1115 10:36:05.240837  711801 default_sa.go:55] duration metric: took 65.61505ms for default service account to be created ...
	I1115 10:36:05.240863  711801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:36:05.276191  711801 system_pods.go:86] 8 kube-system pods found
	I1115 10:36:05.276294  711801 system_pods.go:89] "coredns-66bc5c9577-sl29r" [01a3916e-f489-4ca0-aa5f-05b2370df255] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:36:05.276321  711801 system_pods.go:89] "etcd-embed-certs-531596" [0c093954-3401-4c0a-8691-4d5253364a1b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:36:05.276362  711801 system_pods.go:89] "kindnet-9pzmc" [cae08f8f-7e2f-4f7b-a8e3-dddd7f2a4f22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:36:05.276395  711801 system_pods.go:89] "kube-apiserver-embed-certs-531596" [ec3eab77-05c0-40b7-b2ba-8610e4e2f33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:36:05.276438  711801 system_pods.go:89] "kube-controller-manager-embed-certs-531596" [2de4cb12-0355-48ca-8288-595acb3acfc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:36:05.276467  711801 system_pods.go:89] "kube-proxy-nqfl8" [32c8087e-941a-4953-ae21-a83d98b0fc8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:36:05.276496  711801 system_pods.go:89] "kube-scheduler-embed-certs-531596" [dfe838b5-ff4d-45ba-b012-1a8e6c155b63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:36:05.276537  711801 system_pods.go:89] "storage-provisioner" [2feb3053-812e-439e-b003-38aa75d3cf38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:36:05.276565  711801 system_pods.go:126] duration metric: took 35.669447ms to wait for k8s-apps to be running ...
	I1115 10:36:05.276590  711801 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:36:05.276696  711801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:36:05.294553  711801 system_svc.go:56] duration metric: took 17.944985ms WaitForService to wait for kubelet
	I1115 10:36:05.294637  711801 kubeadm.go:587] duration metric: took 7.285276759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:05.294672  711801 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:36:05.308114  711801 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:36:05.308196  711801 node_conditions.go:123] node cpu capacity is 2
	I1115 10:36:05.308234  711801 node_conditions.go:105] duration metric: took 13.529297ms to run NodePressure ...
	I1115 10:36:05.308279  711801 start.go:242] waiting for startup goroutines ...
	I1115 10:36:05.308305  711801 start.go:247] waiting for cluster config update ...
	I1115 10:36:05.308330  711801 start.go:256] writing updated cluster config ...
	I1115 10:36:05.308678  711801 ssh_runner.go:195] Run: rm -f paused
	I1115 10:36:05.314361  711801 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:05.380986  711801 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sl29r" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:36:07.386205  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:09.390413  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:11.391818  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:13.396961  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
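The trace above polls https://192.168.76.2:8443/healthz until the 500 responses (with the rbac/bootstrap-roles post-start hook still pending) turn into a 200 "ok", then waits for the kube-system pods to report Ready. Below is a minimal Go sketch of such a healthz polling loop, assuming the URL from the log and skipping TLS verification for brevity; it is an illustration of the pattern, not minikube's code.

	// Sketch: poll an apiserver /healthz endpoint until it returns 200 OK.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver in this setup serves a self-signed certificate, so the
			// sketch skips verification; real callers should pin the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// A 500 with "[-]poststarthook/... failed" means not ready yet; retry.
				fmt.Printf("healthz %d: %.60s...\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

The loop treats any 500 as "not ready yet" and retries, which matches the behaviour visible in the log: the first probe fails on the bootstrap-roles hook and the next one, roughly half a second later, returns 200.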
	
	
	==> CRI-O <==
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.029672109Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.042405588Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.042564352Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.042663861Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.052459429Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.052492355Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.052508888Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.063381791Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.063571021Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.063675001Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.086237925Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:35:58 no-preload-907610 crio[653]: time="2025-11-15T10:35:58.08627319Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.091007494Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d017f9a-1841-44c1-ba68-b5123b2a7ab2 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.092511091Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f3bb0616-54e0-4d26-a2ee-e751e1c0af7b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.093861995Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper" id=1acfab62-b509-45fd-9c5a-e7e5faeee112 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.094042429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.102802447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.103353898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.156344175Z" level=info msg="Created container 32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper" id=1acfab62-b509-45fd-9c5a-e7e5faeee112 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.161868313Z" level=info msg="Starting container: 32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425" id=50fcdf13-71e0-4c71-a569-f7614347ef76 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.165746293Z" level=info msg="Started container" PID=1719 containerID=32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper id=50fcdf13-71e0-4c71-a569-f7614347ef76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1e331914c7ea5bbbc91745e56453da46903f9fc077203f37c4237e8573aa1b5
	Nov 15 10:36:09 no-preload-907610 conmon[1717]: conmon 32c5bec10debbffa97c8 <ninfo>: container 1719 exited with status 1
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.349380125Z" level=info msg="Removing container: e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52" id=61614f41-1365-45d8-bb1c-838a94f47893 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.361784398Z" level=info msg="Error loading conmon cgroup of container e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52: cgroup deleted" id=61614f41-1365-45d8-bb1c-838a94f47893 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:36:09 no-preload-907610 crio[653]: time="2025-11-15T10:36:09.367269398Z" level=info msg="Removed container e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk/dashboard-metrics-scraper" id=61614f41-1365-45d8-bb1c-838a94f47893 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	32c5bec10debb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   b1e331914c7ea       dashboard-metrics-scraper-6ffb444bf9-wp5nk   kubernetes-dashboard
	c406583649734       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           31 seconds ago       Running             storage-provisioner         2                   7fb66e5711814       storage-provisioner                          kube-system
	629e55498715b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   0c1fc54e35ccf       kubernetes-dashboard-855c9754f9-nf42b        kubernetes-dashboard
	3895648769f9b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   2f0b4ab85bc9c       coredns-66bc5c9577-ql8g6                     kube-system
	ac6df99cce17d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   3fc90c399713e       busybox                                      default
	e83f82003a368       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   2da48b8e87a3e       kube-proxy-rh8h4                             kube-system
	c244dd133e1dd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   282b27aad797e       kindnet-kgnjv                                kube-system
	718b69e5cb82f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   7fb66e5711814       storage-provisioner                          kube-system
	e39b03ee83b22       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   17391725f794a       etcd-no-preload-907610                       kube-system
	fbba8e0ca18f1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   d29ab304e603f       kube-scheduler-no-preload-907610             kube-system
	2595af5ed79b0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ff532881dc7ad       kube-apiserver-no-preload-907610             kube-system
	aa8b90296193a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   0dcc70746fe82       kube-controller-manager-no-preload-907610    kube-system
	
	
	==> coredns [3895648769f9b491dcabadc00916d0ddad17303c433290e0f2ea5f58450bca76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53281 - 6538 "HINFO IN 140613460854729103.6544380428870274949. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020768817s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-907610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-907610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=no-preload-907610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-907610
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:35:57 +0000   Sat, 15 Nov 2025 10:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-907610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d6341372-a597-4e99-ab89-f00924067763
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-ql8g6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-907610                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-kgnjv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-no-preload-907610              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-no-preload-907610     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-rh8h4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-no-preload-907610              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wp5nk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nf42b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 116s                   kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           119s                   node-controller  Node no-preload-907610 event: Registered Node no-preload-907610 in Controller
	  Normal   NodeReady                101s                   kubelet          Node no-preload-907610 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node no-preload-907610 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node no-preload-907610 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node no-preload-907610 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                    node-controller  Node no-preload-907610 event: Registered Node no-preload-907610 in Controller
	
	
	==> dmesg <==
	[Nov15 10:12] overlayfs: idmapped layers are currently not supported
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e39b03ee83b22e578b9f5605c3bd8e0ef77ee33deddbb7aa1624c005fece9124] <==
	{"level":"warn","ts":"2025-11-15T10:35:15.090675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.117169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.136196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.161154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.177182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.214942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.238342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.269517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.304531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.334958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.375661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.414601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.435897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.448622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.465745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.484914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.506327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.522255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.576292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.598348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.632053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.655875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.671609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.690650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:35:15.748549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:19 up  5:18,  0 user,  load average: 4.09, 3.54, 2.96
	Linux no-preload-907610 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c244dd133e1dd4c2d238dd64ce7a3cf6ebc1eab1f51072dc25ab2e89edea3d0a] <==
	I1115 10:35:17.809899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:35:17.814762       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:35:17.814910       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:35:17.814923       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:35:17.814934       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:35:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:35:18.014604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:35:18.022515       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:35:18.022561       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:35:18.022732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:35:48.011706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:35:48.014138       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 10:35:48.015507       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:35:48.015511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1115 10:35:49.323493       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:35:49.323608       1 metrics.go:72] Registering metrics
	I1115 10:35:49.323720       1 controller.go:711] "Syncing nftables rules"
	I1115 10:35:58.012754       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:35:58.012891       1 main.go:301] handling current node
	I1115 10:36:08.016989       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:36:08.017024       1 main.go:301] handling current node
	I1115 10:36:18.017085       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:36:18.017180       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2595af5ed79b0d008d0a4a9885bb6bb2d922c8e0fc4984e57ea1078e606230d7] <==
	I1115 10:35:16.640497       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:35:16.640502       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:35:16.640508       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:35:16.675650       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:35:16.687874       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:35:16.688047       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:35:16.688063       1 policy_source.go:240] refreshing policies
	I1115 10:35:16.688303       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:35:16.691841       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:35:16.697131       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1115 10:35:16.702368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:35:16.702980       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:35:16.703005       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:35:16.703705       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:35:17.065630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:35:17.420593       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:35:17.694934       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:35:17.809051       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:35:17.848533       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:35:17.868798       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:35:17.973342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.22.178"}
	I1115 10:35:17.999942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.131.161"}
	I1115 10:35:20.066169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:35:20.261571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:35:20.610954       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [aa8b90296193a6708fd35513c6745262f53b36234f1f69ebb1d6aee50a60dfcd] <==
	I1115 10:35:20.065239       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:35:20.069425       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:35:20.072427       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:35:20.076911       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:35:20.082586       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:35:20.084783       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:35:20.086964       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:35:20.087244       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:35:20.088162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:35:20.096128       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:35:20.098879       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:35:20.102788       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:35:20.102788       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:35:20.103041       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:35:20.103060       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:35:20.103074       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:35:20.104524       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:35:20.106517       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:35:20.106662       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:35:20.106760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-907610"
	I1115 10:35:20.106826       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:35:20.109080       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:35:20.112491       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:35:20.115058       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:35:20.118465       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [e83f82003a36807fb00707347761aad18e6318b7683492f9be8eb5f018407286] <==
	I1115 10:35:18.195937       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:35:18.339640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:35:18.441211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:35:18.441328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:35:18.441411       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:35:18.460932       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:35:18.460986       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:35:18.466473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:35:18.466826       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:35:18.466851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:18.467859       1 config.go:309] "Starting node config controller"
	I1115 10:35:18.467883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:35:18.470578       1 config.go:200] "Starting service config controller"
	I1115 10:35:18.470642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:35:18.472044       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:35:18.472067       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:35:18.472085       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:35:18.472089       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:35:18.568713       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:35:18.570918       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:35:18.573156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:35:18.573171       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fbba8e0ca18f1bb361aade61f62504b671e8e02da9e21dc771c669d6472159f2] <==
	I1115 10:35:16.522171       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:35:16.528377       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:35:16.529079       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:35:16.529135       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:35:16.532137       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1115 10:35:16.570948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 10:35:16.571902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:35:16.572031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:35:16.572127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:35:16.572243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:35:16.572297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:35:16.572341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:35:16.572398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:35:16.572442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:35:16.572485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:35:16.572528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:35:16.572567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:35:16.572616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:35:16.572674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:35:16.572715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:35:16.572768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:35:16.572806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:35:16.572893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:35:16.572941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1115 10:35:18.133110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:35:20 no-preload-907610 kubelet[770]: I1115 10:35:20.874821     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwfz\" (UniqueName: \"kubernetes.io/projected/cc0070c5-5691-4c32-a0c4-91cd5ed4d27b-kube-api-access-tlwfz\") pod \"dashboard-metrics-scraper-6ffb444bf9-wp5nk\" (UID: \"cc0070c5-5691-4c32-a0c4-91cd5ed4d27b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk"
	Nov 15 10:35:20 no-preload-907610 kubelet[770]: I1115 10:35:20.874854     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n847\" (UniqueName: \"kubernetes.io/projected/a18b3230-1ea5-4199-abb6-f03a528c964f-kube-api-access-2n847\") pod \"kubernetes-dashboard-855c9754f9-nf42b\" (UID: \"a18b3230-1ea5-4199-abb6-f03a528c964f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nf42b"
	Nov 15 10:35:26 no-preload-907610 kubelet[770]: I1115 10:35:26.190648     770 scope.go:117] "RemoveContainer" containerID="a83f0ce14e84d59e68fd56b344f250878dfe739ec2d3dcaff67324c028050df6"
	Nov 15 10:35:27 no-preload-907610 kubelet[770]: I1115 10:35:27.214197     770 scope.go:117] "RemoveContainer" containerID="a83f0ce14e84d59e68fd56b344f250878dfe739ec2d3dcaff67324c028050df6"
	Nov 15 10:35:27 no-preload-907610 kubelet[770]: I1115 10:35:27.215222     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:27 no-preload-907610 kubelet[770]: E1115 10:35:27.215397     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:28 no-preload-907610 kubelet[770]: I1115 10:35:28.240738     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:28 no-preload-907610 kubelet[770]: E1115 10:35:28.240924     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:34 no-preload-907610 kubelet[770]: I1115 10:35:34.776528     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:34 no-preload-907610 kubelet[770]: E1115 10:35:34.777173     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.089673     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.282364     770 scope.go:117] "RemoveContainer" containerID="d0dc57b6e28aec1e020caf2f8073802f3889e1defc636418e7cc9aba919f35cb"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.282667     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: E1115 10:35:45.282828     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:35:45 no-preload-907610 kubelet[770]: I1115 10:35:45.311403     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nf42b" podStartSLOduration=15.423006737 podStartE2EDuration="25.311382861s" podCreationTimestamp="2025-11-15 10:35:20 +0000 UTC" firstStartedPulling="2025-11-15 10:35:21.064720476 +0000 UTC m=+9.234985491" lastFinishedPulling="2025-11-15 10:35:30.9530966 +0000 UTC m=+19.123361615" observedRunningTime="2025-11-15 10:35:31.268821103 +0000 UTC m=+19.439086126" watchObservedRunningTime="2025-11-15 10:35:45.311382861 +0000 UTC m=+33.481647892"
	Nov 15 10:35:48 no-preload-907610 kubelet[770]: I1115 10:35:48.292727     770 scope.go:117] "RemoveContainer" containerID="718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9"
	Nov 15 10:35:54 no-preload-907610 kubelet[770]: I1115 10:35:54.776829     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:35:54 no-preload-907610 kubelet[770]: E1115 10:35:54.777422     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:36:09 no-preload-907610 kubelet[770]: I1115 10:36:09.089834     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:36:09 no-preload-907610 kubelet[770]: I1115 10:36:09.347951     770 scope.go:117] "RemoveContainer" containerID="e7301ab27b5f51abd79b144f0d8377ca30f957dd28082b606b17e0d5b3af3c52"
	Nov 15 10:36:10 no-preload-907610 kubelet[770]: I1115 10:36:10.351935     770 scope.go:117] "RemoveContainer" containerID="32c5bec10debbffa97c84420edb6c5f01fddca1d08b8be8234d5997d4cf77425"
	Nov 15 10:36:10 no-preload-907610 kubelet[770]: E1115 10:36:10.352090     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wp5nk_kubernetes-dashboard(cc0070c5-5691-4c32-a0c4-91cd5ed4d27b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wp5nk" podUID="cc0070c5-5691-4c32-a0c4-91cd5ed4d27b"
	Nov 15 10:36:13 no-preload-907610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:36:13 no-preload-907610 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:36:13 no-preload-907610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [629e55498715b583593925f583bf65140ba0524b62727038c86feededab7c232] <==
	2025/11/15 10:35:31 Using namespace: kubernetes-dashboard
	2025/11/15 10:35:31 Using in-cluster config to connect to apiserver
	2025/11/15 10:35:31 Using secret token for csrf signing
	2025/11/15 10:35:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:35:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:35:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:35:31 Generating JWE encryption key
	2025/11/15 10:35:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:35:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:35:31 Initializing JWE encryption key from synchronized object
	2025/11/15 10:35:31 Creating in-cluster Sidecar client
	2025/11/15 10:35:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:31 Serving insecurely on HTTP port: 9090
	2025/11/15 10:36:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:35:31 Starting overwatch
	
	
	==> storage-provisioner [718b69e5cb82f61c6caf5f3e606e2ec1aa724f90f8a35bfe302191fef4b322d9] <==
	I1115 10:35:17.620959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:35:47.623670       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c4065836497343e6d9303217966ac46ac647fdbce23f7e49368d3880af4e8fc6] <==
	W1115 10:35:56.067963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:35:59.666223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:02.720067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.742681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.751244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:05.751433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:05.757565       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-907610_21b537b7-0db2-4b30-afba-8a82f96a2376!
	I1115 10:36:05.754230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd672018-af9b-4d26-a795-58bf6d65cf94", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-907610_21b537b7-0db2-4b30-afba-8a82f96a2376 became leader
	W1115 10:36:05.760770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:05.766417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:05.858321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-907610_21b537b7-0db2-4b30-afba-8a82f96a2376!
	W1115 10:36:07.769569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:07.774063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:09.777449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:09.783166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:11.786346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:11.796653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:13.801182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:13.807384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:15.810743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:15.816786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:17.823107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:17.834755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:19.838546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:19.842950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
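The logs above show the same symptom in two pods right after the node restart: kindnet's reflectors and the first storage-provisioner container both fail with "dial tcp 10.96.0.1:443: i/o timeout" when talking to the in-cluster apiserver service. A minimal sketch of that reachability check follows; the address 10.96.0.1:443 and the ~30s budget are taken from the log lines, everything else (and the probe itself) is illustrative and not storage-provisioner or kindnet code.

    // probe.go - hedged sketch: dial the in-cluster apiserver service address
    // that the pods above time out against. Address and timeout come from the
    // log; the probe is only an illustration of the failing call pattern.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 30*time.Second)
        if err != nil {
            // This is the "i/o timeout" case seen in the kindnet and
            // storage-provisioner logs above.
            fmt.Println("apiserver service unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver service reachable")
    }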
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-907610 -n no-preload-907610
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-907610 -n no-preload-907610: exit status 2 (499.480204ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-907610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.95s)
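The post-mortem steps above query node state with `status --format={{.APIServer}}` and `--format={{.Host}}`; the flag argument is a Go text/template rendered against minikube's status value. A toy rendering is sketched below; the Status struct and the placeholder values are hypothetical, only the field names Host and APIServer are taken from the templates used in the helpers.

    // status_format.go - hedged sketch of rendering a --format style template.
    // Struct and values are placeholders for illustration, not minikube's types.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Host      string
        APIServer string
    }

    func main() {
        // Equivalent in spirit to: status --format={{.APIServer}}
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
    }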

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-531596 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-531596 --alsologtostderr -v=1: exit status 80 (2.15457422s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-531596 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:37:00.526692  717626 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:00.526887  717626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:00.526902  717626 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:00.526908  717626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:00.527253  717626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:37:00.527543  717626 out.go:368] Setting JSON to false
	I1115 10:37:00.527577  717626 mustload.go:66] Loading cluster: embed-certs-531596
	I1115 10:37:00.528119  717626 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:00.529037  717626 cli_runner.go:164] Run: docker container inspect embed-certs-531596 --format={{.State.Status}}
	I1115 10:37:00.549263  717626 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:37:00.549651  717626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:00.618397  717626 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:37:00.608359964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:00.619192  717626 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-531596 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:37:00.624577  717626 out.go:179] * Pausing node embed-certs-531596 ... 
	I1115 10:37:00.627470  717626 host.go:66] Checking if "embed-certs-531596" exists ...
	I1115 10:37:00.628518  717626 ssh_runner.go:195] Run: systemctl --version
	I1115 10:37:00.628811  717626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-531596
	I1115 10:37:00.658617  717626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/embed-certs-531596/id_rsa Username:docker}
	I1115 10:37:00.768199  717626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:00.791956  717626 pause.go:52] kubelet running: true
	I1115 10:37:00.792061  717626 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:37:01.056868  717626 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:37:01.056964  717626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:37:01.178719  717626 cri.go:89] found id: "de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558"
	I1115 10:37:01.178745  717626 cri.go:89] found id: "485037dfcd265044f912626ceea2d533281e3d74aeea571cd809b54553eccd15"
	I1115 10:37:01.178751  717626 cri.go:89] found id: "befdacbffc79f18ce39527c245ffbfb64f06c3603bc06289cea4dadfac5cbe3c"
	I1115 10:37:01.178754  717626 cri.go:89] found id: "7686787474fca52c2819c5885171525469c5859b47781117bc24b263d240bda7"
	I1115 10:37:01.178758  717626 cri.go:89] found id: "8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af"
	I1115 10:37:01.178761  717626 cri.go:89] found id: "6574dcaec8359c575d60f7f9b4b4a31ffe5a8ffe0a63577c96e18b02396872f9"
	I1115 10:37:01.178765  717626 cri.go:89] found id: "fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc"
	I1115 10:37:01.178769  717626 cri.go:89] found id: "f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df"
	I1115 10:37:01.178772  717626 cri.go:89] found id: "8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17"
	I1115 10:37:01.178778  717626 cri.go:89] found id: "edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	I1115 10:37:01.178781  717626 cri.go:89] found id: "46685d2b8f35198cf7577a7610a7caf8d7d1fab9df4015f3cd9180ef344ac005"
	I1115 10:37:01.178784  717626 cri.go:89] found id: ""
	I1115 10:37:01.178841  717626 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:01.194325  717626 retry.go:31] will retry after 325.43257ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:01Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:37:01.520860  717626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:01.536175  717626 pause.go:52] kubelet running: false
	I1115 10:37:01.536301  717626 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:37:01.760724  717626 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:37:01.760830  717626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:37:01.851484  717626 cri.go:89] found id: "de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558"
	I1115 10:37:01.851508  717626 cri.go:89] found id: "485037dfcd265044f912626ceea2d533281e3d74aeea571cd809b54553eccd15"
	I1115 10:37:01.851513  717626 cri.go:89] found id: "befdacbffc79f18ce39527c245ffbfb64f06c3603bc06289cea4dadfac5cbe3c"
	I1115 10:37:01.851517  717626 cri.go:89] found id: "7686787474fca52c2819c5885171525469c5859b47781117bc24b263d240bda7"
	I1115 10:37:01.851521  717626 cri.go:89] found id: "8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af"
	I1115 10:37:01.851525  717626 cri.go:89] found id: "6574dcaec8359c575d60f7f9b4b4a31ffe5a8ffe0a63577c96e18b02396872f9"
	I1115 10:37:01.851528  717626 cri.go:89] found id: "fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc"
	I1115 10:37:01.851532  717626 cri.go:89] found id: "f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df"
	I1115 10:37:01.851535  717626 cri.go:89] found id: "8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17"
	I1115 10:37:01.851541  717626 cri.go:89] found id: "edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	I1115 10:37:01.851545  717626 cri.go:89] found id: "46685d2b8f35198cf7577a7610a7caf8d7d1fab9df4015f3cd9180ef344ac005"
	I1115 10:37:01.851548  717626 cri.go:89] found id: ""
	I1115 10:37:01.851633  717626 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:01.866646  717626 retry.go:31] will retry after 401.298092ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:01Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:37:02.268273  717626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:02.281946  717626 pause.go:52] kubelet running: false
	I1115 10:37:02.282051  717626 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:37:02.457314  717626 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:37:02.457431  717626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:37:02.535790  717626 cri.go:89] found id: "de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558"
	I1115 10:37:02.535813  717626 cri.go:89] found id: "485037dfcd265044f912626ceea2d533281e3d74aeea571cd809b54553eccd15"
	I1115 10:37:02.535818  717626 cri.go:89] found id: "befdacbffc79f18ce39527c245ffbfb64f06c3603bc06289cea4dadfac5cbe3c"
	I1115 10:37:02.535822  717626 cri.go:89] found id: "7686787474fca52c2819c5885171525469c5859b47781117bc24b263d240bda7"
	I1115 10:37:02.535826  717626 cri.go:89] found id: "8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af"
	I1115 10:37:02.535830  717626 cri.go:89] found id: "6574dcaec8359c575d60f7f9b4b4a31ffe5a8ffe0a63577c96e18b02396872f9"
	I1115 10:37:02.535833  717626 cri.go:89] found id: "fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc"
	I1115 10:37:02.535837  717626 cri.go:89] found id: "f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df"
	I1115 10:37:02.535840  717626 cri.go:89] found id: "8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17"
	I1115 10:37:02.535847  717626 cri.go:89] found id: "edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	I1115 10:37:02.535850  717626 cri.go:89] found id: "46685d2b8f35198cf7577a7610a7caf8d7d1fab9df4015f3cd9180ef344ac005"
	I1115 10:37:02.535854  717626 cri.go:89] found id: ""
	I1115 10:37:02.535910  717626 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:37:02.550723  717626 out.go:203] 
	W1115 10:37:02.553409  717626 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:37:02.553433  717626 out.go:285] * 
	* 
	W1115 10:37:02.560890  717626 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:37:02.563506  717626 out.go:203] 

                                                
                                                
** /stderr **
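The stderr above shows the pause path itself: disable kubelet, enumerate CRI containers in the kube-system/kubernetes-dashboard/istio-operator namespaces, then run `sudo runc list -f json`, which exits 1 because `/run/runc` does not exist on the node, and after the retries minikube exits with GUEST_PAUSE. A hedged sketch of that failing step follows; treating a missing state directory as "no runc-managed containers" is an assumption for illustration only, not minikube's implementation, and the /run/runc path is taken from the error message.

    // runc_list.go - hedged sketch reproducing the check that fails above.
    // `runc list -f json` is the exact command from the log; the guard on
    // /run/runc and the empty-list fallback are illustrative assumptions.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func listRuncContainers() ([]byte, error) {
        if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
            // State directory absent (the "open /run/runc: no such file or
            // directory" case above): report no runc-managed containers.
            return []byte("[]"), nil
        }
        return exec.Command("sudo", "runc", "list", "-f", "json").Output()
    }

    func main() {
        out, err := listRuncContainers()
        if err != nil {
            fmt.Fprintln(os.Stderr, "runc list failed:", err)
            os.Exit(1)
        }
        fmt.Println(string(out))
    }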
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-531596 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-531596
helpers_test.go:243: (dbg) docker inspect embed-certs-531596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c",
	        "Created": "2025-11-15T10:34:04.609645199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 711926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:50.037648079Z",
	            "FinishedAt": "2025-11-15T10:35:49.213667879Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/hosts",
	        "LogPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c-json.log",
	        "Name": "/embed-certs-531596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-531596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-531596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c",
	                "LowerDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-531596",
	                "Source": "/var/lib/docker/volumes/embed-certs-531596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-531596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-531596",
	                "name.minikube.sigs.k8s.io": "embed-certs-531596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b05973b08bba4c8d7976c0414b7830598dd6851c2c0feb1b8b75ae8cfe9b5997",
	            "SandboxKey": "/var/run/docker/netns/b05973b08bba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-531596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:ba:1e:2a:fe:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3f5512ac8a850c62b7a0512f3192588adf3870d53b8a37838ac0a556f7411b44",
	                    "EndpointID": "0c000c21605c0366b1a448c017b65cb2e89df24455ebb4dce64647a59041888d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-531596",
	                        "6743ffb16c2e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
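Note: the inspect output above shows the kic container publishing ports 22, 2376, 5000, 8443 and 32443 on 127.0.0.1 (host ports 33804-33808). As a minimal illustrative sketch (not part of the harness), one of those mappings can be read back with the same Go-template style that minikube itself runs later in these logs; the container name is taken from this report:
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Ask the Docker CLI for the host port bound to the API server port
		// (8443/tcp) of the kic container inspected above.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"embed-certs-531596").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the state captured above this prints 33807.
		fmt.Println("8443/tcp ->", strings.TrimSpace(string(out)))
	}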
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596: exit status 2 (413.414998ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
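Note: the "(may be ok)" above reflects that a non-zero exit from the status command can still leave a usable host state ("Running") on stdout. A minimal sketch of tolerating that in Go, assuming the binary path and profile name used in this report:
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-531596", "-n", "embed-certs-531596")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if err != nil && !errors.As(err, &exitErr) {
			// Only a failure to run the binary is fatal; a non-zero exit
			// still leaves the host state on stdout.
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("host state %q, exit code %d\n",
			strings.TrimSpace(string(out)), cmd.ProcessState.ExitCode())
	}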
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-531596 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-531596 logs -n 25: (1.838160302s)
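Note: post-mortem collection here is simply the profile's last 25 log lines (the run above took about 1.8s). A minimal sketch of invoking it with a deadline, assuming the same binary and profile; the timeout value is illustrative:
	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		// Bound the post-mortem step so a wedged node cannot stall the report.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		out, err := exec.CommandContext(ctx, "out/minikube-linux-arm64",
			"-p", "embed-certs-531596", "logs", "-n", "25").CombinedOutput()
		if err != nil {
			fmt.Println("logs collection failed:", err)
		}
		fmt.Print(string(out)) // dump whatever was captured, even on error
	}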
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:23.940649  715373 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:23.940895  715373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:23.940930  715373 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:23.940952  715373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:23.941234  715373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:36:23.941757  715373 out.go:368] Setting JSON to false
	I1115 10:36:23.942851  715373 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19135,"bootTime":1763183849,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:36:23.942963  715373 start.go:143] virtualization:  
	I1115 10:36:23.946837  715373 out.go:179] * [default-k8s-diff-port-303164] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:36:23.950987  715373 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:36:23.951086  715373 notify.go:221] Checking for updates...
	I1115 10:36:23.957346  715373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:23.960322  715373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:36:23.963322  715373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:36:23.966342  715373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:36:23.969284  715373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:23.972798  715373 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:23.972909  715373 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:23.995850  715373 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:36:23.995975  715373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:24.075372  715373 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:36:24.066114381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:36:24.075494  715373 docker.go:319] overlay module found
	I1115 10:36:24.080696  715373 out.go:179] * Using the docker driver based on user configuration
	I1115 10:36:24.083618  715373 start.go:309] selected driver: docker
	I1115 10:36:24.083641  715373 start.go:930] validating driver "docker" against <nil>
	I1115 10:36:24.083663  715373 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:24.084387  715373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:24.145974  715373 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:36:24.136017933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:36:24.146131  715373 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:36:24.146366  715373 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:24.149273  715373 out.go:179] * Using Docker driver with root privileges
	I1115 10:36:24.152193  715373 cni.go:84] Creating CNI manager for ""
	I1115 10:36:24.152275  715373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:24.152289  715373 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:36:24.152370  715373 start.go:353] cluster config:
	{Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:24.155466  715373 out.go:179] * Starting "default-k8s-diff-port-303164" primary control-plane node in "default-k8s-diff-port-303164" cluster
	I1115 10:36:24.158342  715373 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:24.161297  715373 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:24.164230  715373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:24.164294  715373 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:36:24.164308  715373 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:24.164311  715373 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:24.164410  715373 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:36:24.164421  715373 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:24.164539  715373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json ...
	I1115 10:36:24.164566  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json: {Name:mk4e1f1ef193ee3bbb131af9fa690974de571373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:24.182864  715373 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:24.182885  715373 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:24.182902  715373 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:24.182931  715373 start.go:360] acquireMachinesLock for default-k8s-diff-port-303164: {Name:mk83c2e290ad1c4cd9ca7124b1a50f58d94cf4bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:24.183039  715373 start.go:364] duration metric: took 86.275µs to acquireMachinesLock for "default-k8s-diff-port-303164"
	I1115 10:36:24.183070  715373 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:24.183140  715373 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:36:20.399844  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:22.887038  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:24.186611  715373 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:36:24.186842  715373 start.go:159] libmachine.API.Create for "default-k8s-diff-port-303164" (driver="docker")
	I1115 10:36:24.186885  715373 client.go:173] LocalClient.Create starting
	I1115 10:36:24.186976  715373 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:36:24.187014  715373 main.go:143] libmachine: Decoding PEM data...
	I1115 10:36:24.187034  715373 main.go:143] libmachine: Parsing certificate...
	I1115 10:36:24.187107  715373 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:36:24.187130  715373 main.go:143] libmachine: Decoding PEM data...
	I1115 10:36:24.187140  715373 main.go:143] libmachine: Parsing certificate...
	I1115 10:36:24.187509  715373 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:36:24.204280  715373 cli_runner.go:211] docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:36:24.204368  715373 network_create.go:284] running [docker network inspect default-k8s-diff-port-303164] to gather additional debugging logs...
	I1115 10:36:24.204391  715373 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164
	W1115 10:36:24.219891  715373 cli_runner.go:211] docker network inspect default-k8s-diff-port-303164 returned with exit code 1
	I1115 10:36:24.219954  715373 network_create.go:287] error running [docker network inspect default-k8s-diff-port-303164]: docker network inspect default-k8s-diff-port-303164: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-303164 not found
	I1115 10:36:24.219968  715373 network_create.go:289] output of [docker network inspect default-k8s-diff-port-303164]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-303164 not found
	
	** /stderr **
	I1115 10:36:24.220069  715373 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:24.236334  715373 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:36:24.236678  715373 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:36:24.237021  715373 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:36:24.237287  715373 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3f5512ac8a85 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:2b:8b:6d:61:3d} reservation:<nil>}
	I1115 10:36:24.237718  715373 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a26aa0}
	I1115 10:36:24.237742  715373 network_create.go:124] attempt to create docker network default-k8s-diff-port-303164 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:36:24.237798  715373 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 default-k8s-diff-port-303164
	I1115 10:36:24.294651  715373 network_create.go:108] docker network default-k8s-diff-port-303164 192.168.85.0/24 created
	I1115 10:36:24.294698  715373 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-303164" container
	I1115 10:36:24.294773  715373 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:36:24.311181  715373 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-303164 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:36:24.328972  715373 oci.go:103] Successfully created a docker volume default-k8s-diff-port-303164
	I1115 10:36:24.329065  715373 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-303164-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --entrypoint /usr/bin/test -v default-k8s-diff-port-303164:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:36:24.906713  715373 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-303164
	I1115 10:36:24.906787  715373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:24.906801  715373 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:36:24.906877  715373 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 10:36:25.390986  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:27.395832  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:29.317380  715373 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41045684s)
	I1115 10:36:29.317421  715373 kic.go:203] duration metric: took 4.410615236s to extract preloaded images to volume ...
	W1115 10:36:29.317569  715373 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:36:29.317715  715373 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:36:29.372716  715373 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-303164 --name default-k8s-diff-port-303164 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --network default-k8s-diff-port-303164 --ip 192.168.85.2 --volume default-k8s-diff-port-303164:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:36:29.696425  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Running}}
	I1115 10:36:29.720923  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:36:29.747908  715373 cli_runner.go:164] Run: docker exec default-k8s-diff-port-303164 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:36:29.807647  715373 oci.go:144] the created container "default-k8s-diff-port-303164" has a running status.
	I1115 10:36:29.807674  715373 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa...
	I1115 10:36:30.366571  715373 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:36:30.396700  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:36:30.414403  715373 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:36:30.414430  715373 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-303164 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:36:30.461384  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:36:30.480831  715373 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:30.480919  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:30.498196  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:30.498528  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:30.498537  715373 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:30.499189  715373 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60550->127.0.0.1:33809: read: connection reset by peer
	I1115 10:36:33.656960  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:36:33.656986  715373 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-303164"
	I1115 10:36:33.657057  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:33.674798  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.675112  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:33.675130  715373 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-303164 && echo "default-k8s-diff-port-303164" | sudo tee /etc/hostname
	I1115 10:36:33.836186  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:36:33.836319  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:33.856072  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.856402  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:33.856428  715373 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-303164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-303164/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-303164' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1115 10:36:29.887022  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:31.887261  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:34.406118  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:34.010366  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:34.010445  715373 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:36:34.010483  715373 ubuntu.go:190] setting up certificates
	I1115 10:36:34.010536  715373 provision.go:84] configureAuth start
	I1115 10:36:34.010629  715373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:36:34.029866  715373 provision.go:143] copyHostCerts
	I1115 10:36:34.029953  715373 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:36:34.029966  715373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:36:34.030066  715373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:36:34.030184  715373 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:36:34.030210  715373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:36:34.030255  715373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:36:34.030383  715373 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:36:34.030398  715373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:36:34.030443  715373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:36:34.030522  715373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-303164 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-303164 localhost minikube]
	I1115 10:36:34.704834  715373 provision.go:177] copyRemoteCerts
	I1115 10:36:34.704909  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:34.704950  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:34.722952  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:34.835090  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:34.854642  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:34.871070  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:34.890948  715373 provision.go:87] duration metric: took 880.374489ms to configureAuth
	I1115 10:36:34.890974  715373 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:34.891159  715373 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:34.891261  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:34.909852  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.910177  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:34.910200  715373 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.263265  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.263285  715373 machine.go:97] duration metric: took 4.782435405s to provisionDockerMachine
	I1115 10:36:35.263295  715373 client.go:176] duration metric: took 11.076398209s to LocalClient.Create
	I1115 10:36:35.263315  715373 start.go:167] duration metric: took 11.076474285s to libmachine.API.Create "default-k8s-diff-port-303164"
	I1115 10:36:35.263324  715373 start.go:293] postStartSetup for "default-k8s-diff-port-303164" (driver="docker")
	I1115 10:36:35.263334  715373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.263393  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.263434  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.279835  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.392163  715373 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.395561  715373 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.395594  715373 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.395605  715373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:36:35.395660  715373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:36:35.395747  715373 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:36:35.395857  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.403094  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:36:35.421834  715373 start.go:296] duration metric: took 158.494755ms for postStartSetup
	I1115 10:36:35.422318  715373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:36:35.441635  715373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json ...
	I1115 10:36:35.441908  715373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.441957  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.459685  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.563453  715373 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:35.571560  715373 start.go:128] duration metric: took 11.388361267s to createHost
	I1115 10:36:35.571638  715373 start.go:83] releasing machines lock for "default-k8s-diff-port-303164", held for 11.388583825s
	I1115 10:36:35.571744  715373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:36:35.593493  715373 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:35.593550  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.593730  715373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:35.593785  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.625966  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.627458  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.823480  715373 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:35.830120  715373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:35.868416  715373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:35.872967  715373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:35.873039  715373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:35.906512  715373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:36:35.906534  715373 start.go:496] detecting cgroup driver to use...
	I1115 10:36:35.906565  715373 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:35.906620  715373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:35.924081  715373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:35.936920  715373 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:35.937012  715373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:35.954768  715373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:35.974859  715373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.106890  715373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.239618  715373 docker.go:234] disabling docker service ...
	I1115 10:36:36.239736  715373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.269369  715373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.286634  715373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.413761  715373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.546838  715373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.562013  715373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.576303  715373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.576396  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.585174  715373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.585267  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.594227  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.603724  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.613284  715373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.621730  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.630248  715373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.643607  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.653034  715373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.661225  715373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.668989  715373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.784224  715373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:36.914863  715373 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:36.914954  715373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:36.918738  715373 start.go:564] Will wait 60s for crictl version
	I1115 10:36:36.918825  715373 ssh_runner.go:195] Run: which crictl
	I1115 10:36:36.922525  715373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:36.950811  715373 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:36.950919  715373 ssh_runner.go:195] Run: crio --version
	I1115 10:36:36.982035  715373 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.014730  715373 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
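The two "Will wait 60s" steps above amount to polling for the CRI socket and then asking crictl for the runtime version. A rough sketch under those assumptions (crictl on PATH, socket path as logged); it is not the actual minikube implementation:

// Poll for the CRI-O socket, then run crictl version against it.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for %s", sock)
		}
		time.Sleep(500 * time.Millisecond)
	}
	out, err := exec.Command("crictl", "--runtime-endpoint", "unix://"+sock, "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}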
	I1115 10:36:37.018595  715373 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.042619  715373 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.047538  715373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.057836  715373 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.057955  715373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.058015  715373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.092450  715373 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.092477  715373 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.092538  715373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.118240  715373 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.118266  715373 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.118274  715373 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.118388  715373 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.118515  715373 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.182471  715373 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.182496  715373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.182514  715373 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.182539  715373 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303164 NodeName:default-k8s-diff-port-303164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.182675  715373 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
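The generated kubeadm.yaml above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). A small standard-library-only sanity check that lists the apiVersion and kind of each document; the file path is taken from the log and is an assumption:

// Split the kubeadm config on "---" and report each document's identity.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log; adjust as needed
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	doc := 1
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "---":
			doc++
		case strings.HasPrefix(line, "apiVersion:"), strings.HasPrefix(line, "kind:"):
			fmt.Printf("doc %d: %s\n", doc, line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}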
	
	I1115 10:36:37.182756  715373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.191404  715373 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.191526  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.199619  715373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.212996  715373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.226023  715373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:36:37.238592  715373 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.242056  715373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.251606  715373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.382675  715373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.410572  715373 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164 for IP: 192.168.85.2
	I1115 10:36:37.410595  715373 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.410612  715373 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.410749  715373 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:36:37.410795  715373 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:36:37.410818  715373 certs.go:257] generating profile certs ...
	I1115 10:36:37.410874  715373 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key
	I1115 10:36:37.410890  715373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt with IP's: []
	W1115 10:36:36.886206  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:38.888329  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:39.074156  715373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt ...
	I1115 10:36:39.074192  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: {Name:mke9b36e01aa6cd0d9145f828db40b3208979cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.074437  715373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key ...
	I1115 10:36:39.074455  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key: {Name:mkb2232a8d6420fddc4e3f1010a5e912748440f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.074564  715373 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336
	I1115 10:36:39.074585  715373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:36:39.239884  715373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336 ...
	I1115 10:36:39.239913  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336: {Name:mk54b60063234b6f4395363bb465a2f00ba81820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.240103  715373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336 ...
	I1115 10:36:39.240117  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336: {Name:mk125c5fe0265882b59c111dbe335b20a3b621cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.240205  715373 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt
	I1115 10:36:39.240296  715373 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key
	I1115 10:36:39.240385  715373 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key
	I1115 10:36:39.240419  715373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt with IP's: []
	I1115 10:36:39.470456  715373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt ...
	I1115 10:36:39.470490  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt: {Name:mkb6c6197b31aa8889bbe60f97da2fbe634c563a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.470691  715373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key ...
	I1115 10:36:39.470708  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key: {Name:mkd5d6d153af36887a360f58e3fc5e809bf7a416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
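The cert steps above sign profile certificates against the shared minikubeCA, with the apiserver cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A simplified illustration of what that amounts to, not minikube's crypto.go; the key type, subjects, and validity here are arbitrary choices:

// Create a throwaway CA and sign a serving cert with the SANs from the log.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (self-signed).
	caPub, caKey, _ := ed25519.GenerateKey(rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, caPub, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs shown in the log.
	// The serving private key is discarded here for brevity.
	srvPub, _, _ := ed25519.GenerateKey(rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, srvPub, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}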
	I1115 10:36:39.470900  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:36:39.470975  715373 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:39.470991  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:36:39.471024  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:39.471056  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:39.471086  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:36:39.471142  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:36:39.471766  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:39.492036  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:39.513345  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:39.531337  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:39.549518  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:39.566675  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:39.583597  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:39.601108  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:39.619449  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:39.636572  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:36:39.654376  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:36:39.671393  715373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:39.684260  715373 ssh_runner.go:195] Run: openssl version
	I1115 10:36:39.690637  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:36:39.698637  715373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:36:39.702188  715373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:36:39.702306  715373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:36:39.743366  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:39.752436  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:36:39.760834  715373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:36:39.764549  715373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:36:39.764609  715373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:36:39.805702  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:39.814122  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:39.822082  715373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:39.826426  715373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:39.826486  715373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:39.870388  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
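The openssl/ln steps above wire each CA PEM into the OpenSSL trust store by its subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of that wiring, assuming openssl on PATH and write access to /etc/ssl/certs:

// Compute the subject hash with openssl and create the <hash>.0 symlink.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, mirroring ln -fs
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, pemPath)
}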
	I1115 10:36:39.878839  715373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:39.885014  715373 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:36:39.885066  715373 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:39.885138  715373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:39.885199  715373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:39.915863  715373 cri.go:89] found id: ""
	I1115 10:36:39.915977  715373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:39.923790  715373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:36:39.931500  715373 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:36:39.931586  715373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:36:39.939248  715373 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:36:39.939282  715373 kubeadm.go:158] found existing configuration files:
	
	I1115 10:36:39.939353  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1115 10:36:39.947167  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:36:39.947287  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:36:39.954636  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1115 10:36:39.962500  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:36:39.962568  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:36:39.970142  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1115 10:36:39.978087  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:36:39.978196  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:36:39.985402  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1115 10:36:39.993073  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:36:39.993146  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
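The stale-config cleanup above greps each kubeconfig for the expected control-plane endpoint and removes the file when it is absent. A condensed local equivalent under that assumption; minikube itself runs these steps over SSH:

// Keep a kubeconfig only if it targets the expected endpoint; otherwise remove it.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Printf("remove %s: %v", f, err)
		}
	}
}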
	I1115 10:36:40.000555  715373 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:36:40.068409  715373 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:36:40.068805  715373 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:36:40.099619  715373 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:36:40.099709  715373 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:36:40.099766  715373 kubeadm.go:319] OS: Linux
	I1115 10:36:40.099829  715373 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:36:40.099902  715373 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:36:40.099966  715373 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:36:40.100034  715373 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:36:40.100103  715373 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:36:40.100169  715373 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:36:40.100267  715373 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:36:40.100333  715373 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:36:40.100406  715373 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:36:40.174031  715373 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:36:40.174153  715373 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:36:40.174256  715373 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:36:40.181918  715373 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:36:40.187264  715373 out.go:252]   - Generating certificates and keys ...
	I1115 10:36:40.187394  715373 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:36:40.187481  715373 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:36:41.659976  715373 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:36:42.201676  715373 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:36:43.251428  715373 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:36:43.669962  715373 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1115 10:36:41.396584  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:43.888344  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:45.172855  715373 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:36:45.173036  715373 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-303164 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:36:45.432834  715373 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:36:45.433145  715373 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-303164 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:36:45.557728  715373 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:36:45.892751  715373 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:36:45.977681  715373 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:36:45.977775  715373 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:36:46.135212  715373 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:36:46.577307  715373 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:36:47.282709  715373 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:36:47.916296  715373 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:36:48.038243  715373 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:36:48.039117  715373 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:36:48.042004  715373 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:36:46.390330  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:46.886761  711801 pod_ready.go:94] pod "coredns-66bc5c9577-sl29r" is "Ready"
	I1115 10:36:46.886785  711801 pod_ready.go:86] duration metric: took 41.505728819s for pod "coredns-66bc5c9577-sl29r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.889433  711801 pod_ready.go:83] waiting for pod "etcd-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.894042  711801 pod_ready.go:94] pod "etcd-embed-certs-531596" is "Ready"
	I1115 10:36:46.894067  711801 pod_ready.go:86] duration metric: took 4.581066ms for pod "etcd-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.896399  711801 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.901264  711801 pod_ready.go:94] pod "kube-apiserver-embed-certs-531596" is "Ready"
	I1115 10:36:46.901294  711801 pod_ready.go:86] duration metric: took 4.875794ms for pod "kube-apiserver-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.904008  711801 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.086303  711801 pod_ready.go:94] pod "kube-controller-manager-embed-certs-531596" is "Ready"
	I1115 10:36:47.086380  711801 pod_ready.go:86] duration metric: took 182.310729ms for pod "kube-controller-manager-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.286380  711801 pod_ready.go:83] waiting for pod "kube-proxy-nqfl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.686869  711801 pod_ready.go:94] pod "kube-proxy-nqfl8" is "Ready"
	I1115 10:36:47.686962  711801 pod_ready.go:86] duration metric: took 400.507227ms for pod "kube-proxy-nqfl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.886270  711801 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:48.286147  711801 pod_ready.go:94] pod "kube-scheduler-embed-certs-531596" is "Ready"
	I1115 10:36:48.286170  711801 pod_ready.go:86] duration metric: took 399.869411ms for pod "kube-scheduler-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:48.286182  711801 pod_ready.go:40] duration metric: took 42.971788607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:48.353259  711801 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:36:48.358693  711801 out.go:179] * Done! kubectl is now configured to use "embed-certs-531596" cluster and "default" namespace by default
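The pod_ready waits above poll kube-system pods until their Ready condition is True or a deadline passes. A rough equivalent built on kubectl rather than minikube's pod_ready.go, assuming kubectl and a working kubeconfig; the pod name below is just the one from the log:

// Poll a pod's Ready condition via kubectl jsonpath until True or timeout.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func waitReady(ns, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	sel := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, "-o", sel).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, pod, timeout)
}

func main() {
	if err := waitReady("kube-system", "coredns-66bc5c9577-sl29r", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("pod is Ready")
}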
	I1115 10:36:48.045439  715373 out.go:252]   - Booting up control plane ...
	I1115 10:36:48.045554  715373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:36:48.045663  715373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:36:48.045736  715373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:36:48.074402  715373 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:36:48.074540  715373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:36:48.083300  715373 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:36:48.086929  715373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:36:48.086984  715373 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:36:48.217010  715373 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:36:48.217135  715373 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:36:50.717495  715373 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.500787028s
	I1115 10:36:50.723516  715373 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:36:50.723616  715373 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1115 10:36:50.724018  715373 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:36:50.724114  715373 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:36:53.756222  715373 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.03224047s
	I1115 10:36:56.064181  715373 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.340629501s
	I1115 10:36:57.725867  715373 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002031848s
	I1115 10:36:57.749402  715373 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:36:57.764771  715373 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:36:57.777841  715373 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:36:57.779091  715373 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-303164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:36:57.794817  715373 kubeadm.go:319] [bootstrap-token] Using token: ws59ug.oi7p2vpap16xq9au
	I1115 10:36:57.797748  715373 out.go:252]   - Configuring RBAC rules ...
	I1115 10:36:57.797912  715373 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:36:57.802922  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:36:57.811261  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:36:57.819069  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:36:57.823389  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:36:57.829926  715373 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:36:58.133725  715373 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:36:58.598884  715373 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:36:59.133512  715373 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:36:59.134888  715373 kubeadm.go:319] 
	I1115 10:36:59.134967  715373 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:36:59.134973  715373 kubeadm.go:319] 
	I1115 10:36:59.135070  715373 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:36:59.135076  715373 kubeadm.go:319] 
	I1115 10:36:59.135102  715373 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:36:59.135164  715373 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:36:59.135216  715373 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:36:59.135221  715373 kubeadm.go:319] 
	I1115 10:36:59.135277  715373 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:36:59.135282  715373 kubeadm.go:319] 
	I1115 10:36:59.135331  715373 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:36:59.135340  715373 kubeadm.go:319] 
	I1115 10:36:59.135394  715373 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:36:59.135473  715373 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:36:59.135544  715373 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:36:59.135548  715373 kubeadm.go:319] 
	I1115 10:36:59.135636  715373 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:36:59.135715  715373 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:36:59.135719  715373 kubeadm.go:319] 
	I1115 10:36:59.135806  715373 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token ws59ug.oi7p2vpap16xq9au \
	I1115 10:36:59.135914  715373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:36:59.135935  715373 kubeadm.go:319] 	--control-plane 
	I1115 10:36:59.135940  715373 kubeadm.go:319] 
	I1115 10:36:59.136028  715373 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:36:59.136032  715373 kubeadm.go:319] 
	I1115 10:36:59.136117  715373 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token ws59ug.oi7p2vpap16xq9au \
	I1115 10:36:59.136224  715373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:36:59.139532  715373 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:36:59.139770  715373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:36:59.139884  715373 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:36:59.139903  715373 cni.go:84] Creating CNI manager for ""
	I1115 10:36:59.139911  715373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:59.143049  715373 out.go:179] * Configuring CNI (Container Networking Interface) ...
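The control-plane-check phase earlier in this init run polls each component's health endpoint (apiserver /livez on 192.168.85.2:8444, controller-manager :10257/healthz, scheduler :10259/livez). A loose analogue of one such check; TLS verification is skipped because the local endpoints use self-signed certs, and the URL is simply the one from the log:

// Poll a health endpoint until it answers 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8444/livez"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				log.Printf("%s is healthy", url)
				return
			}
		}
		time.Sleep(time.Second)
	}
	log.Fatalf("%s not healthy before deadline", url)
}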
	
	
	==> CRI-O <==
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.524728803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8a5137b2-7dcd-4403-b56a-30f57c338271 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.525642378Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5693ac81-93c6-4288-bd66-958fa68503cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.525760413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.533819274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.533997443Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/618e3a8e331c3170a16c15f2c79a6760356c3d5ebb31fabcfcebe2927f67e8e1/merged/etc/passwd: no such file or directory"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.534026168Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/618e3a8e331c3170a16c15f2c79a6760356c3d5ebb31fabcfcebe2927f67e8e1/merged/etc/group: no such file or directory"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.534293878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.554656567Z" level=info msg="Created container de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558: kube-system/storage-provisioner/storage-provisioner" id=5693ac81-93c6-4288-bd66-958fa68503cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.555573596Z" level=info msg="Starting container: de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558" id=dfec9ad3-82a3-48b1-9735-db7ec93b283f name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.558316511Z" level=info msg="Started container" PID=1628 containerID=de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558 description=kube-system/storage-provisioner/storage-provisioner id=dfec9ad3-82a3-48b1-9735-db7ec93b283f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ce71c4e76d114f0878f7b4ef2ba4fd843f8b9230da057699884db712db56c6a
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.228068676Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.246646783Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.247105117Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.247189546Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.253824667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.254062256Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.254205055Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.257902937Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.258071983Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.258150101Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.261788999Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.261950611Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.262038969Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.273920804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.27410274Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	de1e8a6e238b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   8ce71c4e76d11       storage-provisioner                          kube-system
	edffc037317d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   eeb215851413b       dashboard-metrics-scraper-6ffb444bf9-tdd78   kubernetes-dashboard
	46685d2b8f351       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   aad02c1ad8739       kubernetes-dashboard-855c9754f9-57w6h        kubernetes-dashboard
	485037dfcd265       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   1a813df1d1200       coredns-66bc5c9577-sl29r                     kube-system
	614e61f01d657       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   84d97811f972f       busybox                                      default
	befdacbffc79f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   5fdbe0db09f02       kube-proxy-nqfl8                             kube-system
	7686787474fca       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   b43c68c790d64       kindnet-9pzmc                                kube-system
	8d6ee23472f50       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   8ce71c4e76d11       storage-provisioner                          kube-system
	6574dcaec8359       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   bdca84735e1fb       kube-apiserver-embed-certs-531596            kube-system
	fb296649ab4b3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   647c9b263ac44       kube-controller-manager-embed-certs-531596   kube-system
	f50de0346fbea       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7e1e908600d96       kube-scheduler-embed-certs-531596            kube-system
	8c893770fdb03       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4afb85aea0808       etcd-embed-certs-531596                      kube-system
	
	
	==> coredns [485037dfcd265044f912626ceea2d533281e3d74aeea571cd809b54553eccd15] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33230 - 530 "HINFO IN 4959120970874875679.3411899866601838677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00579494s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
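The coredns errors above are plain TCP timeouts to the kubernetes Service VIP (10.96.0.1:443) before kube-proxy programming caught up. A tiny diagnostic sketch of the kind of probe one might run from inside a pod to confirm whether that VIP is reachable; it is not part of the test suite:

// Dial the kubernetes Service VIP with a short timeout.
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		log.Fatalf("cannot reach the kubernetes Service VIP: %v", err)
	}
	conn.Close()
	log.Println("10.96.0.1:443 is reachable")
}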
	
	
	==> describe nodes <==
	Name:               embed-certs-531596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-531596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=embed-certs-531596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-531596
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:35:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-531596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                86513864-e880-4a89-8b90-c692d6bc7e85
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-sl29r                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-embed-certs-531596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-9pzmc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-531596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-531596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-nqfl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-531596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tdd78    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-57w6h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s                  node-controller  Node embed-certs-531596 event: Registered Node embed-certs-531596 in Controller
	  Normal   NodeReady                102s                   kubelet          Node embed-certs-531596 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node embed-certs-531596 event: Registered Node embed-certs-531596 in Controller
	
	
	==> dmesg <==
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17] <==
	{"level":"warn","ts":"2025-11-15T10:36:00.750777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.784578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.831478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.870070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.938540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.042957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.080706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.126227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.150577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.202817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.239684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.283948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.327453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.378828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.426777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.465847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.479142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.516747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.548251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.589328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.620583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.679492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.708710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.776378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.847950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38436","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:04 up  5:19,  0 user,  load average: 4.08, 3.60, 3.00
	Linux embed-certs-531596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7686787474fca52c2819c5885171525469c5859b47781117bc24b263d240bda7] <==
	I1115 10:36:05.016170       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:05.016504       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:36:05.016626       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:05.016646       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:05.016668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:05.224101       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:05.224133       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:05.224142       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:05.227835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:36:35.224453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:36:35.224578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:36:35.228648       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:36:35.228647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:36:36.924567       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:36:36.924667       1 metrics.go:72] Registering metrics
	I1115 10:36:36.924776       1 controller.go:711] "Syncing nftables rules"
	I1115 10:36:45.226129       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:45.226591       1 main.go:301] handling current node
	I1115 10:36:55.229662       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:55.229698       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6574dcaec8359c575d60f7f9b4b4a31ffe5a8ffe0a63577c96e18b02396872f9] <==
	I1115 10:36:03.407061       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:36:03.407083       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:36:03.408972       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:36:03.414167       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:03.414277       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:36:03.414457       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:36:03.433683       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:03.456523       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:36:03.466520       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:36:03.466585       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:36:03.466662       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:36:03.466671       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:36:03.466678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:03.466683       1 cache.go:39] Caches are synced for autoregister controller
	E1115 10:36:03.481338       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:03.838042       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:04.230485       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:04.320395       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:04.367752       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:04.399315       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:04.573446       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.142.33"}
	I1115 10:36:04.592276       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.250.191"}
	I1115 10:36:06.902060       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:06.951954       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:07.003184       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc] <==
	I1115 10:36:06.497443       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:36:06.504202       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:36:06.506092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:36:06.507333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:06.507355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:06.507363       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:36:06.507475       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:06.507643       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:36:06.511911       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:36:06.515936       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:06.526609       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:36:06.531915       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:36:06.534383       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:36:06.538973       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:36:06.540129       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:36:06.542428       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:36:06.543474       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:36:06.546433       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:36:06.546484       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:36:06.546522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:06.546572       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:36:06.546643       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:36:06.546691       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:06.555846       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:06.568106       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [befdacbffc79f18ce39527c245ffbfb64f06c3603bc06289cea4dadfac5cbe3c] <==
	I1115 10:36:05.435332       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:05.659928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:05.770209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:05.770326       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:36:05.770441       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:05.830738       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:05.830804       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:05.841131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:05.841536       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:05.841864       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:05.845946       1 config.go:200] "Starting service config controller"
	I1115 10:36:05.845999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:05.846041       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:05.846068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:05.846129       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:05.846158       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:05.846812       1 config.go:309] "Starting node config controller"
	I1115 10:36:05.849257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:05.849340       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:05.946688       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:36:05.946731       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:05.946704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df] <==
	I1115 10:36:03.359564       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:36:05.794498       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:05.794533       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:05.801872       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:05.802578       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:36:05.806020       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:36:05.802707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:05.802968       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:05.806301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:05.803077       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:05.806343       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:05.906495       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:05.906519       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:36:05.906541       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:07 embed-certs-531596 kubelet[770]: I1115 10:36:07.199290     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1ba49be7-6120-447b-a77f-a5167b5c87ad-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-tdd78\" (UID: \"1ba49be7-6120-447b-a77f-a5167b5c87ad\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78"
	Nov 15 10:36:07 embed-certs-531596 kubelet[770]: I1115 10:36:07.199314     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b33d94a0-d2c6-4220-b732-0427f005a96c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-57w6h\" (UID: \"b33d94a0-d2c6-4220-b732-0427f005a96c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-57w6h"
	Nov 15 10:36:07 embed-certs-531596 kubelet[770]: I1115 10:36:07.199334     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlpk\" (UniqueName: \"kubernetes.io/projected/b33d94a0-d2c6-4220-b732-0427f005a96c-kube-api-access-2rlpk\") pod \"kubernetes-dashboard-855c9754f9-57w6h\" (UID: \"b33d94a0-d2c6-4220-b732-0427f005a96c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-57w6h"
	Nov 15 10:36:08 embed-certs-531596 kubelet[770]: W1115 10:36:08.417799     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/crio-aad02c1ad8739cfd937bc41317299156a30b56b966500556949a0dd770c1c08f WatchSource:0}: Error finding container aad02c1ad8739cfd937bc41317299156a30b56b966500556949a0dd770c1c08f: Status 404 returned error can't find the container with id aad02c1ad8739cfd937bc41317299156a30b56b966500556949a0dd770c1c08f
	Nov 15 10:36:13 embed-certs-531596 kubelet[770]: I1115 10:36:13.447088     770 scope.go:117] "RemoveContainer" containerID="2ddbe08ebaee826748cbcd8b10579faa231491c63663966470dc7a9996a0815b"
	Nov 15 10:36:14 embed-certs-531596 kubelet[770]: I1115 10:36:14.453696     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:14 embed-certs-531596 kubelet[770]: E1115 10:36:14.453871     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:14 embed-certs-531596 kubelet[770]: I1115 10:36:14.456329     770 scope.go:117] "RemoveContainer" containerID="2ddbe08ebaee826748cbcd8b10579faa231491c63663966470dc7a9996a0815b"
	Nov 15 10:36:15 embed-certs-531596 kubelet[770]: I1115 10:36:15.457499     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:15 embed-certs-531596 kubelet[770]: E1115 10:36:15.458285     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:18 embed-certs-531596 kubelet[770]: I1115 10:36:18.345179     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:18 embed-certs-531596 kubelet[770]: E1115 10:36:18.345353     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:20 embed-certs-531596 kubelet[770]: I1115 10:36:20.500332     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-57w6h" podStartSLOduration=2.024294541 podStartE2EDuration="13.500304754s" podCreationTimestamp="2025-11-15 10:36:07 +0000 UTC" firstStartedPulling="2025-11-15 10:36:08.422349722 +0000 UTC m=+11.324115074" lastFinishedPulling="2025-11-15 10:36:19.898359935 +0000 UTC m=+22.800125287" observedRunningTime="2025-11-15 10:36:20.498575407 +0000 UTC m=+23.400340776" watchObservedRunningTime="2025-11-15 10:36:20.500304754 +0000 UTC m=+23.402070106"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: I1115 10:36:33.310516     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: I1115 10:36:33.514750     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: I1115 10:36:33.516476     770 scope.go:117] "RemoveContainer" containerID="edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: E1115 10:36:33.517893     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:35 embed-certs-531596 kubelet[770]: I1115 10:36:35.522971     770 scope.go:117] "RemoveContainer" containerID="8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af"
	Nov 15 10:36:38 embed-certs-531596 kubelet[770]: I1115 10:36:38.345389     770 scope.go:117] "RemoveContainer" containerID="edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	Nov 15 10:36:38 embed-certs-531596 kubelet[770]: E1115 10:36:38.345578     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:52 embed-certs-531596 kubelet[770]: I1115 10:36:52.309815     770 scope.go:117] "RemoveContainer" containerID="edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	Nov 15 10:36:52 embed-certs-531596 kubelet[770]: E1115 10:36:52.310468     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:37:00 embed-certs-531596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:37:01 embed-certs-531596 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:37:01 embed-certs-531596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [46685d2b8f35198cf7577a7610a7caf8d7d1fab9df4015f3cd9180ef344ac005] <==
	2025/11/15 10:36:19 Using namespace: kubernetes-dashboard
	2025/11/15 10:36:19 Using in-cluster config to connect to apiserver
	2025/11/15 10:36:19 Using secret token for csrf signing
	2025/11/15 10:36:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:36:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:36:20 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:36:20 Generating JWE encryption key
	2025/11/15 10:36:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:36:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:36:20 Initializing JWE encryption key from synchronized object
	2025/11/15 10:36:20 Creating in-cluster Sidecar client
	2025/11/15 10:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:20 Serving insecurely on HTTP port: 9090
	2025/11/15 10:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:19 Starting overwatch
	
	
	==> storage-provisioner [8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af] <==
	I1115 10:36:04.895274       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:36:34.897676       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558] <==
	I1115 10:36:35.595631       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:36:35.595774       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:36:35.606933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:39.063204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:43.325297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:46.923770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:49.977696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:52.999803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:53.009521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:53.009709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:53.009914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-531596_c3000c07-475f-4656-ba6e-526de0c452fd!
	I1115 10:36:53.010863       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"50582e2f-871b-4ae3-bc92-dc6483b1130c", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-531596_c3000c07-475f-4656-ba6e-526de0c452fd became leader
	W1115 10:36:53.024625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:53.040829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:53.110298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-531596_c3000c07-475f-4656-ba6e-526de0c452fd!
	W1115 10:36:55.043590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:55.048444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:57.051717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:57.056584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:59.059837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:59.067253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:01.070509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:01.075893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:03.083092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:03.088472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-531596 -n embed-certs-531596
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-531596 -n embed-certs-531596: exit status 2 (539.108991ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-531596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-531596
helpers_test.go:243: (dbg) docker inspect embed-certs-531596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c",
	        "Created": "2025-11-15T10:34:04.609645199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 711926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:35:50.037648079Z",
	            "FinishedAt": "2025-11-15T10:35:49.213667879Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/hosts",
	        "LogPath": "/var/lib/docker/containers/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c-json.log",
	        "Name": "/embed-certs-531596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-531596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-531596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c",
	                "LowerDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e85719fc9d493b29220ee52f09c0410e1c89963857c9967add99ed0a19cdfb68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-531596",
	                "Source": "/var/lib/docker/volumes/embed-certs-531596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-531596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-531596",
	                "name.minikube.sigs.k8s.io": "embed-certs-531596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b05973b08bba4c8d7976c0414b7830598dd6851c2c0feb1b8b75ae8cfe9b5997",
	            "SandboxKey": "/var/run/docker/netns/b05973b08bba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-531596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:ba:1e:2a:fe:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3f5512ac8a850c62b7a0512f3192588adf3870d53b8a37838ac0a556f7411b44",
	                    "EndpointID": "0c000c21605c0366b1a448c017b65cb2e89df24455ebb4dce64647a59041888d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-531596",
	                        "6743ffb16c2e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596: exit status 2 (490.129426ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-531596 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-531596 logs -n 25: (1.298768871s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:32 UTC │ 15 Nov 25 10:33 UTC │
	│ image   │ old-k8s-version-448285 image list --format=json                                                                                                                                                                                               │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ pause   │ -p old-k8s-version-448285 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │                     │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:36:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:36:23.940649  715373 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:36:23.940895  715373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:23.940930  715373 out.go:374] Setting ErrFile to fd 2...
	I1115 10:36:23.940952  715373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:36:23.941234  715373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:36:23.941757  715373 out.go:368] Setting JSON to false
	I1115 10:36:23.942851  715373 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19135,"bootTime":1763183849,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:36:23.942963  715373 start.go:143] virtualization:  
	I1115 10:36:23.946837  715373 out.go:179] * [default-k8s-diff-port-303164] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:36:23.950987  715373 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:36:23.951086  715373 notify.go:221] Checking for updates...
	I1115 10:36:23.957346  715373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:36:23.960322  715373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:36:23.963322  715373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:36:23.966342  715373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:36:23.969284  715373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:36:23.972798  715373 config.go:182] Loaded profile config "embed-certs-531596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:23.972909  715373 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:36:23.995850  715373 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:36:23.995975  715373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:24.075372  715373 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:36:24.066114381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:36:24.075494  715373 docker.go:319] overlay module found
	I1115 10:36:24.080696  715373 out.go:179] * Using the docker driver based on user configuration
	I1115 10:36:24.083618  715373 start.go:309] selected driver: docker
	I1115 10:36:24.083641  715373 start.go:930] validating driver "docker" against <nil>
	I1115 10:36:24.083663  715373 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:36:24.084387  715373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:36:24.145974  715373 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:36:24.136017933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:36:24.146131  715373 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:36:24.146366  715373 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:36:24.149273  715373 out.go:179] * Using Docker driver with root privileges
	I1115 10:36:24.152193  715373 cni.go:84] Creating CNI manager for ""
	I1115 10:36:24.152275  715373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:24.152289  715373 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:36:24.152370  715373 start.go:353] cluster config:
	{Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:24.155466  715373 out.go:179] * Starting "default-k8s-diff-port-303164" primary control-plane node in "default-k8s-diff-port-303164" cluster
	I1115 10:36:24.158342  715373 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:36:24.161297  715373 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:36:24.164230  715373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:24.164294  715373 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:36:24.164308  715373 cache.go:65] Caching tarball of preloaded images
	I1115 10:36:24.164311  715373 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:36:24.164410  715373 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:36:24.164421  715373 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:36:24.164539  715373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json ...
	I1115 10:36:24.164566  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json: {Name:mk4e1f1ef193ee3bbb131af9fa690974de571373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:24.182864  715373 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:36:24.182885  715373 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:36:24.182902  715373 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:36:24.182931  715373 start.go:360] acquireMachinesLock for default-k8s-diff-port-303164: {Name:mk83c2e290ad1c4cd9ca7124b1a50f58d94cf4bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:36:24.183039  715373 start.go:364] duration metric: took 86.275µs to acquireMachinesLock for "default-k8s-diff-port-303164"
	I1115 10:36:24.183070  715373 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:36:24.183140  715373 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:36:20.399844  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:22.887038  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:24.186611  715373 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:36:24.186842  715373 start.go:159] libmachine.API.Create for "default-k8s-diff-port-303164" (driver="docker")
	I1115 10:36:24.186885  715373 client.go:173] LocalClient.Create starting
	I1115 10:36:24.186976  715373 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:36:24.187014  715373 main.go:143] libmachine: Decoding PEM data...
	I1115 10:36:24.187034  715373 main.go:143] libmachine: Parsing certificate...
	I1115 10:36:24.187107  715373 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:36:24.187130  715373 main.go:143] libmachine: Decoding PEM data...
	I1115 10:36:24.187140  715373 main.go:143] libmachine: Parsing certificate...
	I1115 10:36:24.187509  715373 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:36:24.204280  715373 cli_runner.go:211] docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:36:24.204368  715373 network_create.go:284] running [docker network inspect default-k8s-diff-port-303164] to gather additional debugging logs...
	I1115 10:36:24.204391  715373 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164
	W1115 10:36:24.219891  715373 cli_runner.go:211] docker network inspect default-k8s-diff-port-303164 returned with exit code 1
	I1115 10:36:24.219954  715373 network_create.go:287] error running [docker network inspect default-k8s-diff-port-303164]: docker network inspect default-k8s-diff-port-303164: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-303164 not found
	I1115 10:36:24.219968  715373 network_create.go:289] output of [docker network inspect default-k8s-diff-port-303164]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-303164 not found
	
	** /stderr **
	I1115 10:36:24.220069  715373 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:24.236334  715373 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:36:24.236678  715373 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:36:24.237021  715373 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:36:24.237287  715373 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3f5512ac8a85 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:2b:8b:6d:61:3d} reservation:<nil>}
	I1115 10:36:24.237718  715373 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a26aa0}
	I1115 10:36:24.237742  715373 network_create.go:124] attempt to create docker network default-k8s-diff-port-303164 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:36:24.237798  715373 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 default-k8s-diff-port-303164
	I1115 10:36:24.294651  715373 network_create.go:108] docker network default-k8s-diff-port-303164 192.168.85.0/24 created
	I1115 10:36:24.294698  715373 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-303164" container
	I1115 10:36:24.294773  715373 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:36:24.311181  715373 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-303164 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:36:24.328972  715373 oci.go:103] Successfully created a docker volume default-k8s-diff-port-303164
	I1115 10:36:24.329065  715373 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-303164-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --entrypoint /usr/bin/test -v default-k8s-diff-port-303164:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:36:24.906713  715373 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-303164
	I1115 10:36:24.906787  715373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:24.906801  715373 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:36:24.906877  715373 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 10:36:25.390986  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:27.395832  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:29.317380  715373 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-303164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41045684s)
	I1115 10:36:29.317421  715373 kic.go:203] duration metric: took 4.410615236s to extract preloaded images to volume ...
	W1115 10:36:29.317569  715373 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:36:29.317715  715373 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:36:29.372716  715373 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-303164 --name default-k8s-diff-port-303164 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-303164 --network default-k8s-diff-port-303164 --ip 192.168.85.2 --volume default-k8s-diff-port-303164:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:36:29.696425  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Running}}
	I1115 10:36:29.720923  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:36:29.747908  715373 cli_runner.go:164] Run: docker exec default-k8s-diff-port-303164 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:36:29.807647  715373 oci.go:144] the created container "default-k8s-diff-port-303164" has a running status.
	I1115 10:36:29.807674  715373 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa...
	I1115 10:36:30.366571  715373 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:36:30.396700  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:36:30.414403  715373 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:36:30.414430  715373 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-303164 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:36:30.461384  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:36:30.480831  715373 machine.go:94] provisionDockerMachine start ...
	I1115 10:36:30.480919  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:30.498196  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:30.498528  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:30.498537  715373 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:36:30.499189  715373 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60550->127.0.0.1:33809: read: connection reset by peer
	I1115 10:36:33.656960  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:36:33.656986  715373 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-303164"
	I1115 10:36:33.657057  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:33.674798  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.675112  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:33.675130  715373 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-303164 && echo "default-k8s-diff-port-303164" | sudo tee /etc/hostname
	I1115 10:36:33.836186  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:36:33.836319  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:33.856072  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:33.856402  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:33.856428  715373 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-303164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-303164/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-303164' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1115 10:36:29.887022  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:31.887261  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:34.406118  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:34.010366  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:36:34.010445  715373 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:36:34.010483  715373 ubuntu.go:190] setting up certificates
	I1115 10:36:34.010536  715373 provision.go:84] configureAuth start
	I1115 10:36:34.010629  715373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:36:34.029866  715373 provision.go:143] copyHostCerts
	I1115 10:36:34.029953  715373 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:36:34.029966  715373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:36:34.030066  715373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:36:34.030184  715373 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:36:34.030210  715373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:36:34.030255  715373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:36:34.030383  715373 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:36:34.030398  715373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:36:34.030443  715373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:36:34.030522  715373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-303164 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-303164 localhost minikube]
	I1115 10:36:34.704834  715373 provision.go:177] copyRemoteCerts
	I1115 10:36:34.704909  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:36:34.704950  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:34.722952  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:34.835090  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:36:34.854642  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:36:34.871070  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:36:34.890948  715373 provision.go:87] duration metric: took 880.374489ms to configureAuth
	I1115 10:36:34.890974  715373 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:36:34.891159  715373 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:36:34.891261  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:34.909852  715373 main.go:143] libmachine: Using SSH client type: native
	I1115 10:36:34.910177  715373 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33809 <nil> <nil>}
	I1115 10:36:34.910200  715373 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:36:35.263265  715373 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:36:35.263285  715373 machine.go:97] duration metric: took 4.782435405s to provisionDockerMachine
	I1115 10:36:35.263295  715373 client.go:176] duration metric: took 11.076398209s to LocalClient.Create
	I1115 10:36:35.263315  715373 start.go:167] duration metric: took 11.076474285s to libmachine.API.Create "default-k8s-diff-port-303164"
	I1115 10:36:35.263324  715373 start.go:293] postStartSetup for "default-k8s-diff-port-303164" (driver="docker")
	I1115 10:36:35.263334  715373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:36:35.263393  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:36:35.263434  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.279835  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.392163  715373 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:36:35.395561  715373 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:36:35.395594  715373 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:36:35.395605  715373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:36:35.395660  715373 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:36:35.395747  715373 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:36:35.395857  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:36:35.403094  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:36:35.421834  715373 start.go:296] duration metric: took 158.494755ms for postStartSetup
	I1115 10:36:35.422318  715373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:36:35.441635  715373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json ...
	I1115 10:36:35.441908  715373 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:36:35.441957  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.459685  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.563453  715373 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:36:35.571560  715373 start.go:128] duration metric: took 11.388361267s to createHost
	I1115 10:36:35.571638  715373 start.go:83] releasing machines lock for "default-k8s-diff-port-303164", held for 11.388583825s
	I1115 10:36:35.571744  715373 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:36:35.593493  715373 ssh_runner.go:195] Run: cat /version.json
	I1115 10:36:35.593550  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.593730  715373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:36:35.593785  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:36:35.625966  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.627458  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:36:35.823480  715373 ssh_runner.go:195] Run: systemctl --version
	I1115 10:36:35.830120  715373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:36:35.868416  715373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:36:35.872967  715373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:36:35.873039  715373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:36:35.906512  715373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:36:35.906534  715373 start.go:496] detecting cgroup driver to use...
	I1115 10:36:35.906565  715373 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:36:35.906620  715373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:36:35.924081  715373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:36:35.936920  715373 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:36:35.937012  715373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:36:35.954768  715373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:36:35.974859  715373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:36:36.106890  715373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:36:36.239618  715373 docker.go:234] disabling docker service ...
	I1115 10:36:36.239736  715373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:36:36.269369  715373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:36:36.286634  715373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:36:36.413761  715373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:36:36.546838  715373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:36:36.562013  715373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:36:36.576303  715373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:36:36.576396  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.585174  715373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:36:36.585267  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.594227  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.603724  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.613284  715373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:36:36.621730  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.630248  715373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.643607  715373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:36:36.653034  715373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:36:36.661225  715373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:36:36.668989  715373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:36.784224  715373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:36:36.914863  715373 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:36:36.914954  715373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:36:36.918738  715373 start.go:564] Will wait 60s for crictl version
	I1115 10:36:36.918825  715373 ssh_runner.go:195] Run: which crictl
	I1115 10:36:36.922525  715373 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:36:36.950811  715373 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:36:36.950919  715373 ssh_runner.go:195] Run: crio --version
	I1115 10:36:36.982035  715373 ssh_runner.go:195] Run: crio --version
	I1115 10:36:37.014730  715373 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:36:37.018595  715373 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:36:37.042619  715373 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:36:37.047538  715373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.057836  715373 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:36:37.057955  715373 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:36:37.058015  715373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.092450  715373 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.092477  715373 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:36:37.092538  715373 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:36:37.118240  715373 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:36:37.118266  715373 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:36:37.118274  715373 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:36:37.118388  715373 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:36:37.118515  715373 ssh_runner.go:195] Run: crio config
	I1115 10:36:37.182471  715373 cni.go:84] Creating CNI manager for ""
	I1115 10:36:37.182496  715373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:37.182514  715373 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:36:37.182539  715373 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303164 NodeName:default-k8s-diff-port-303164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:36:37.182675  715373 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:36:37.182756  715373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:36:37.191404  715373 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:36:37.191526  715373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:36:37.199619  715373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:36:37.212996  715373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:36:37.226023  715373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
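	(Editor's note) The kubeadm.yaml pushed to the node above is rendered from the kubeadm options logged at kubeadm.go:190. The following is a minimal, standard-library Go sketch of that templating step; it is illustrative only — the struct and template below are simplified stand-ins, not minikube's actual generator — but it shows how values such as AdvertiseAddress, APIServerPort, ClusterName, PodSubnet and ServiceCIDR end up in the InitConfiguration/ClusterConfiguration documents shown earlier.
	
	    package main
	
	    import (
	        "os"
	        "text/template"
	    )
	
	    // kubeadmOpts is a simplified stand-in for the options struct logged above.
	    type kubeadmOpts struct {
	        AdvertiseAddress string
	        APIServerPort    int
	        ClusterName      string
	        PodSubnet        string
	        ServiceCIDR      string
	        K8sVersion       string
	    }
	
	    // tmpl reproduces only a fragment of the generated kubeadm.yaml.
	    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	    kind: InitConfiguration
	    localAPIEndpoint:
	      advertiseAddress: {{.AdvertiseAddress}}
	      bindPort: {{.APIServerPort}}
	    ---
	    apiVersion: kubeadm.k8s.io/v1beta4
	    kind: ClusterConfiguration
	    clusterName: {{.ClusterName}}
	    controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	    kubernetesVersion: {{.K8sVersion}}
	    networking:
	      podSubnet: "{{.PodSubnet}}"
	      serviceSubnet: {{.ServiceCIDR}}
	    `
	
	    func main() {
	        // Values copied from the kubeadm options line in this log.
	        opts := kubeadmOpts{
	            AdvertiseAddress: "192.168.85.2",
	            APIServerPort:    8444,
	            ClusterName:      "mk",
	            PodSubnet:        "10.244.0.0/16",
	            ServiceCIDR:      "10.96.0.0/12",
	            K8sVersion:       "v1.34.1",
	        }
	        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
	    }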
	I1115 10:36:37.238592  715373 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:36:37.242056  715373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:36:37.251606  715373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:36:37.382675  715373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:36:37.410572  715373 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164 for IP: 192.168.85.2
	I1115 10:36:37.410595  715373 certs.go:195] generating shared ca certs ...
	I1115 10:36:37.410612  715373 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:37.410749  715373 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:36:37.410795  715373 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:36:37.410818  715373 certs.go:257] generating profile certs ...
	I1115 10:36:37.410874  715373 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key
	I1115 10:36:37.410890  715373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt with IP's: []
	W1115 10:36:36.886206  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:38.888329  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:39.074156  715373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt ...
	I1115 10:36:39.074192  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: {Name:mke9b36e01aa6cd0d9145f828db40b3208979cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.074437  715373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key ...
	I1115 10:36:39.074455  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key: {Name:mkb2232a8d6420fddc4e3f1010a5e912748440f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.074564  715373 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336
	I1115 10:36:39.074585  715373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 10:36:39.239884  715373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336 ...
	I1115 10:36:39.239913  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336: {Name:mk54b60063234b6f4395363bb465a2f00ba81820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.240103  715373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336 ...
	I1115 10:36:39.240117  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336: {Name:mk125c5fe0265882b59c111dbe335b20a3b621cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.240205  715373 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt.44e49336 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt
	I1115 10:36:39.240296  715373 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key
	I1115 10:36:39.240385  715373 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key
	I1115 10:36:39.240419  715373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt with IP's: []
	I1115 10:36:39.470456  715373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt ...
	I1115 10:36:39.470490  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt: {Name:mkb6c6197b31aa8889bbe60f97da2fbe634c563a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:36:39.470691  715373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key ...
	I1115 10:36:39.470708  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key: {Name:mkd5d6d153af36887a360f58e3fc5e809bf7a416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
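	(Editor's note) The crypto.go lines above generate the profile certificates (client, apiserver, proxy-client), each signed by the shared minikubeCA. As a rough illustration of what such a step involves, here is a self-contained Go sketch using crypto/x509. It is not minikube's implementation: the CA is created in memory rather than loaded from ~/.minikube, subjects and file handling are simplified, and only the IP SANs are copied from the apiserver "Generating cert ... with IP's" log line.
	
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )
	
	    func main() {
	        // Throwaway CA, purely so the example is self-contained.
	        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().AddDate(3, 0, 0),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        caCert, err := x509.ParseCertificate(caDER)
	        if err != nil {
	            log.Fatal(err)
	        }
	
	        // Server certificate signed by the CA, with the IP SANs from the log.
	        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        leafTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
	            },
	        }
	        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // minikube writes apiserver.crt/.key under the profile directory;
	        // printing the PEM keeps this sketch filesystem-free.
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	    }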
	I1115 10:36:39.470900  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:36:39.470975  715373 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:36:39.470991  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:36:39.471024  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:36:39.471056  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:36:39.471086  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:36:39.471142  715373 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:36:39.471766  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:36:39.492036  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:36:39.513345  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:36:39.531337  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:36:39.549518  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:36:39.566675  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:36:39.583597  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:36:39.601108  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:36:39.619449  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:36:39.636572  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:36:39.654376  715373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:36:39.671393  715373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:36:39.684260  715373 ssh_runner.go:195] Run: openssl version
	I1115 10:36:39.690637  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:36:39.698637  715373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:36:39.702188  715373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:36:39.702306  715373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:36:39.743366  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:36:39.752436  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:36:39.760834  715373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:36:39.764549  715373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:36:39.764609  715373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:36:39.805702  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:36:39.814122  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:36:39.822082  715373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:39.826426  715373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:39.826486  715373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:36:39.870388  715373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
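	(Editor's note) The `openssl x509 -hash` / `ln -fs` pairs above install each PEM under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients on the node trust it. A small Go sketch of that pattern follows; the helper is assumed for illustration (not minikube code), and the paths are the ones from the log.
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )
	
	    // trustCert computes the OpenSSL subject hash of a PEM and symlinks
	    // <certsDir>/<hash>.0 to it, mirroring the `ln -fs` commands in the log.
	    func trustCert(pemPath, certsDir string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join(certsDir, hash+".0")
	        _ = os.Remove(link) // force-replace, like `ln -fs`
	        return os.Symlink(pemPath, link)
	    }
	
	    func main() {
	        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }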
	I1115 10:36:39.878839  715373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:36:39.885014  715373 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:36:39.885066  715373 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:36:39.885138  715373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:36:39.885199  715373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:36:39.915863  715373 cri.go:89] found id: ""
	I1115 10:36:39.915977  715373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:36:39.923790  715373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:36:39.931500  715373 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:36:39.931586  715373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:36:39.939248  715373 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:36:39.939282  715373 kubeadm.go:158] found existing configuration files:
	
	I1115 10:36:39.939353  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1115 10:36:39.947167  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:36:39.947287  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:36:39.954636  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1115 10:36:39.962500  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:36:39.962568  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:36:39.970142  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1115 10:36:39.978087  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:36:39.978196  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:36:39.985402  715373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1115 10:36:39.993073  715373 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:36:39.993146  715373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:36:40.000555  715373 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:36:40.068409  715373 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:36:40.068805  715373 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:36:40.099619  715373 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:36:40.099709  715373 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:36:40.099766  715373 kubeadm.go:319] OS: Linux
	I1115 10:36:40.099829  715373 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:36:40.099902  715373 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:36:40.099966  715373 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:36:40.100034  715373 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:36:40.100103  715373 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:36:40.100169  715373 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:36:40.100267  715373 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:36:40.100333  715373 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:36:40.100406  715373 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:36:40.174031  715373 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:36:40.174153  715373 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:36:40.174256  715373 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:36:40.181918  715373 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:36:40.187264  715373 out.go:252]   - Generating certificates and keys ...
	I1115 10:36:40.187394  715373 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:36:40.187481  715373 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:36:41.659976  715373 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:36:42.201676  715373 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:36:43.251428  715373 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:36:43.669962  715373 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1115 10:36:41.396584  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	W1115 10:36:43.888344  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:45.172855  715373 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:36:45.173036  715373 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-303164 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:36:45.432834  715373 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:36:45.433145  715373 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-303164 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:36:45.557728  715373 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:36:45.892751  715373 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:36:45.977681  715373 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:36:45.977775  715373 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:36:46.135212  715373 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:36:46.577307  715373 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:36:47.282709  715373 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:36:47.916296  715373 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:36:48.038243  715373 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:36:48.039117  715373 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:36:48.042004  715373 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:36:46.390330  711801 pod_ready.go:104] pod "coredns-66bc5c9577-sl29r" is not "Ready", error: <nil>
	I1115 10:36:46.886761  711801 pod_ready.go:94] pod "coredns-66bc5c9577-sl29r" is "Ready"
	I1115 10:36:46.886785  711801 pod_ready.go:86] duration metric: took 41.505728819s for pod "coredns-66bc5c9577-sl29r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.889433  711801 pod_ready.go:83] waiting for pod "etcd-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.894042  711801 pod_ready.go:94] pod "etcd-embed-certs-531596" is "Ready"
	I1115 10:36:46.894067  711801 pod_ready.go:86] duration metric: took 4.581066ms for pod "etcd-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.896399  711801 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.901264  711801 pod_ready.go:94] pod "kube-apiserver-embed-certs-531596" is "Ready"
	I1115 10:36:46.901294  711801 pod_ready.go:86] duration metric: took 4.875794ms for pod "kube-apiserver-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:46.904008  711801 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.086303  711801 pod_ready.go:94] pod "kube-controller-manager-embed-certs-531596" is "Ready"
	I1115 10:36:47.086380  711801 pod_ready.go:86] duration metric: took 182.310729ms for pod "kube-controller-manager-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.286380  711801 pod_ready.go:83] waiting for pod "kube-proxy-nqfl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.686869  711801 pod_ready.go:94] pod "kube-proxy-nqfl8" is "Ready"
	I1115 10:36:47.686962  711801 pod_ready.go:86] duration metric: took 400.507227ms for pod "kube-proxy-nqfl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:47.886270  711801 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:48.286147  711801 pod_ready.go:94] pod "kube-scheduler-embed-certs-531596" is "Ready"
	I1115 10:36:48.286170  711801 pod_ready.go:86] duration metric: took 399.869411ms for pod "kube-scheduler-embed-certs-531596" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:36:48.286182  711801 pod_ready.go:40] duration metric: took 42.971788607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:36:48.353259  711801 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:36:48.358693  711801 out.go:179] * Done! kubectl is now configured to use "embed-certs-531596" cluster and "default" namespace by default
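	(Editor's note) The pod_ready.go lines interleaved above (process 711801, the embed-certs-531596 profile) poll kube-system pods until their Ready condition is True or the pod is gone. Below is a simplified client-go sketch of that readiness check; it assumes client-go as a dependency and a kubeconfig in the default location, uses the pod name from the log, and omits the retry/timeout loop. It is not the pod_ready.go implementation.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "log"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // podReady reports whether the pod's PodReady condition is True.
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-sl29r", metav1.GetOptions{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Printf("pod %s ready: %v\n", pod.Name, podReady(pod))
	    }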
	I1115 10:36:48.045439  715373 out.go:252]   - Booting up control plane ...
	I1115 10:36:48.045554  715373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:36:48.045663  715373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:36:48.045736  715373 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:36:48.074402  715373 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:36:48.074540  715373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:36:48.083300  715373 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:36:48.086929  715373 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:36:48.086984  715373 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:36:48.217010  715373 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:36:48.217135  715373 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:36:50.717495  715373 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.500787028s
	I1115 10:36:50.723516  715373 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:36:50.723616  715373 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1115 10:36:50.724018  715373 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:36:50.724114  715373 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:36:53.756222  715373 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.03224047s
	I1115 10:36:56.064181  715373 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.340629501s
	I1115 10:36:57.725867  715373 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002031848s
	I1115 10:36:57.749402  715373 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:36:57.764771  715373 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:36:57.777841  715373 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:36:57.779091  715373 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-303164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:36:57.794817  715373 kubeadm.go:319] [bootstrap-token] Using token: ws59ug.oi7p2vpap16xq9au
	I1115 10:36:57.797748  715373 out.go:252]   - Configuring RBAC rules ...
	I1115 10:36:57.797912  715373 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:36:57.802922  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:36:57.811261  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:36:57.819069  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:36:57.823389  715373 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:36:57.829926  715373 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:36:58.133725  715373 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:36:58.598884  715373 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:36:59.133512  715373 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:36:59.134888  715373 kubeadm.go:319] 
	I1115 10:36:59.134967  715373 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:36:59.134973  715373 kubeadm.go:319] 
	I1115 10:36:59.135070  715373 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:36:59.135076  715373 kubeadm.go:319] 
	I1115 10:36:59.135102  715373 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:36:59.135164  715373 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:36:59.135216  715373 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:36:59.135221  715373 kubeadm.go:319] 
	I1115 10:36:59.135277  715373 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:36:59.135282  715373 kubeadm.go:319] 
	I1115 10:36:59.135331  715373 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:36:59.135340  715373 kubeadm.go:319] 
	I1115 10:36:59.135394  715373 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:36:59.135473  715373 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:36:59.135544  715373 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:36:59.135548  715373 kubeadm.go:319] 
	I1115 10:36:59.135636  715373 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:36:59.135715  715373 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:36:59.135719  715373 kubeadm.go:319] 
	I1115 10:36:59.135806  715373 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token ws59ug.oi7p2vpap16xq9au \
	I1115 10:36:59.135914  715373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:36:59.135935  715373 kubeadm.go:319] 	--control-plane 
	I1115 10:36:59.135940  715373 kubeadm.go:319] 
	I1115 10:36:59.136028  715373 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:36:59.136032  715373 kubeadm.go:319] 
	I1115 10:36:59.136117  715373 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token ws59ug.oi7p2vpap16xq9au \
	I1115 10:36:59.136224  715373 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:36:59.139532  715373 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:36:59.139770  715373 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:36:59.139884  715373 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:36:59.139903  715373 cni.go:84] Creating CNI manager for ""
	I1115 10:36:59.139911  715373 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:36:59.143049  715373 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:36:59.145853  715373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:36:59.150025  715373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:36:59.150047  715373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:36:59.166590  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:36:59.482091  715373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:36:59.482237  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:36:59.482333  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-303164 minikube.k8s.io/updated_at=2025_11_15T10_36_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=default-k8s-diff-port-303164 minikube.k8s.io/primary=true
	I1115 10:36:59.502802  715373 ops.go:34] apiserver oom_adj: -16
	I1115 10:36:59.639195  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:00.139658  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:00.639594  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:01.139771  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:01.640292  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:02.139733  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:02.639297  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:03.141790  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:03.640287  715373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:03.770203  715373 kubeadm.go:1114] duration metric: took 4.288011987s to wait for elevateKubeSystemPrivileges
	I1115 10:37:03.770231  715373 kubeadm.go:403] duration metric: took 23.88516845s to StartCluster
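	(Editor's note) The elevateKubeSystemPrivileges step above retries `kubectl get sa default` roughly every 500ms until the default service account exists, then records the duration metric. A stdlib-only Go sketch of that polling pattern follows; the kubeconfig and binary-path flags from the log are dropped for brevity, and this is not minikube's code.
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )
	
	    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
	    // the timeout expires, returning how long the wait took.
	    func waitForDefaultSA(timeout time.Duration) (time.Duration, error) {
	        start := time.Now()
	        deadline := start.Add(timeout)
	        for {
	            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
	                return time.Since(start), nil
	            }
	            if time.Now().After(deadline) {
	                return time.Since(start), fmt.Errorf("default service account not ready after %s", timeout)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }
	
	    func main() {
	        took, err := waitForDefaultSA(2 * time.Minute)
	        fmt.Println(took, err)
	    }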
	I1115 10:37:03.770249  715373 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:03.770306  715373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:37:03.771860  715373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:03.772122  715373 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:37:03.772383  715373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:37:03.772706  715373 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:37:03.772786  715373 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-303164"
	I1115 10:37:03.772789  715373 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:03.772818  715373 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-303164"
	I1115 10:37:03.772842  715373 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:37:03.772947  715373 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-303164"
	I1115 10:37:03.772963  715373 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-303164"
	I1115 10:37:03.773302  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:37:03.773317  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:37:03.776390  715373 out.go:179] * Verifying Kubernetes components...
	I1115 10:37:03.779320  715373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:37:03.827592  715373 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-303164"
	I1115 10:37:03.827631  715373 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:37:03.828050  715373 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:37:03.831206  715373 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:37:03.837713  715373 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:37:03.837735  715373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:37:03.837796  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:37:03.857916  715373 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:37:03.857936  715373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:37:03.858006  715373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:37:03.881841  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:37:03.898293  715373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:37:04.213733  715373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:37:04.326524  715373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:37:04.441999  715373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:37:04.442126  715373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:37:05.784126  715373 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.341976389s)
	I1115 10:37:05.785272  715373 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-303164" to be "Ready" ...
	I1115 10:37:05.785484  715373 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.34345702s)
	I1115 10:37:05.785498  715373 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 10:37:05.784003  715373 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.457441818s)
	I1115 10:37:05.788932  715373 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
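	(Editor's note) The coredns ConfigMap rewrite at 10:37:04 above pipes the Corefile through sed to insert a hosts block that resolves host.minikube.internal to the host gateway IP, placed just before the forward plugin (the log-directive injection is omitted here). A Go sketch of the same transformation on a Corefile string, illustrative only and using an abbreviated sample Corefile:
	
	    package main
	
	    import (
	        "fmt"
	        "strings"
	    )
	
	    // injectHostRecord inserts a CoreDNS hosts block for host.minikube.internal
	    // immediately before the `forward . /etc/resolv.conf` line of a Corefile.
	    func injectHostRecord(corefile, hostIP string) string {
	        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	        var b strings.Builder
	        for _, line := range strings.SplitAfter(corefile, "\n") {
	            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
	                b.WriteString(hostsBlock)
	            }
	            b.WriteString(line)
	        }
	        return b.String()
	    }
	
	    func main() {
	        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
	        fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
	    }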
	
	
	==> CRI-O <==
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.524728803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8a5137b2-7dcd-4403-b56a-30f57c338271 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.525642378Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5693ac81-93c6-4288-bd66-958fa68503cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.525760413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.533819274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.533997443Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/618e3a8e331c3170a16c15f2c79a6760356c3d5ebb31fabcfcebe2927f67e8e1/merged/etc/passwd: no such file or directory"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.534026168Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/618e3a8e331c3170a16c15f2c79a6760356c3d5ebb31fabcfcebe2927f67e8e1/merged/etc/group: no such file or directory"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.534293878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.554656567Z" level=info msg="Created container de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558: kube-system/storage-provisioner/storage-provisioner" id=5693ac81-93c6-4288-bd66-958fa68503cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.555573596Z" level=info msg="Starting container: de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558" id=dfec9ad3-82a3-48b1-9735-db7ec93b283f name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:36:35 embed-certs-531596 crio[644]: time="2025-11-15T10:36:35.558316511Z" level=info msg="Started container" PID=1628 containerID=de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558 description=kube-system/storage-provisioner/storage-provisioner id=dfec9ad3-82a3-48b1-9735-db7ec93b283f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ce71c4e76d114f0878f7b4ef2ba4fd843f8b9230da057699884db712db56c6a
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.228068676Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.246646783Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.247105117Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.247189546Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.253824667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.254062256Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.254205055Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.257902937Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.258071983Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.258150101Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.261788999Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.261950611Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.262038969Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.273920804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:36:45 embed-certs-531596 crio[644]: time="2025-11-15T10:36:45.27410274Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	de1e8a6e238b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           31 seconds ago       Running             storage-provisioner         2                   8ce71c4e76d11       storage-provisioner                          kube-system
	edffc037317d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   eeb215851413b       dashboard-metrics-scraper-6ffb444bf9-tdd78   kubernetes-dashboard
	46685d2b8f351       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   aad02c1ad8739       kubernetes-dashboard-855c9754f9-57w6h        kubernetes-dashboard
	485037dfcd265       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   1a813df1d1200       coredns-66bc5c9577-sl29r                     kube-system
	614e61f01d657       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   84d97811f972f       busybox                                      default
	befdacbffc79f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   5fdbe0db09f02       kube-proxy-nqfl8                             kube-system
	7686787474fca       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   b43c68c790d64       kindnet-9pzmc                                kube-system
	8d6ee23472f50       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   8ce71c4e76d11       storage-provisioner                          kube-system
	6574dcaec8359       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   bdca84735e1fb       kube-apiserver-embed-certs-531596            kube-system
	fb296649ab4b3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   647c9b263ac44       kube-controller-manager-embed-certs-531596   kube-system
	f50de0346fbea       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7e1e908600d96       kube-scheduler-embed-certs-531596            kube-system
	8c893770fdb03       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4afb85aea0808       etcd-embed-certs-531596                      kube-system
	
	
	==> coredns [485037dfcd265044f912626ceea2d533281e3d74aeea571cd809b54553eccd15] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33230 - 530 "HINFO IN 4959120970874875679.3411899866601838677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00579494s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-531596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-531596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=embed-certs-531596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_34_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:34:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-531596
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:36:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:34:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:36:34 +0000   Sat, 15 Nov 2025 10:35:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-531596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                86513864-e880-4a89-8b90-c692d6bc7e85
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-sl29r                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-embed-certs-531596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-9pzmc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-embed-certs-531596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-embed-certs-531596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-nqfl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-embed-certs-531596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tdd78    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-57w6h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Normal   Starting                 2m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node embed-certs-531596 event: Registered Node embed-certs-531596 in Controller
	  Normal   NodeReady                104s                   kubelet          Node embed-certs-531596 status is now: NodeReady
	  Normal   Starting                 69s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)      kubelet          Node embed-certs-531596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)      kubelet          Node embed-certs-531596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)      kubelet          Node embed-certs-531596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                    node-controller  Node embed-certs-531596 event: Registered Node embed-certs-531596 in Controller
	
	
	==> dmesg <==
	[Nov15 10:13] overlayfs: idmapped layers are currently not supported
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8c893770fdb03a4e37b1d08381d9addac2d7610c1a9489454b5c254477699b17] <==
	{"level":"warn","ts":"2025-11-15T10:36:00.750777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.784578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.831478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.870070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:00.938540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.042957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.080706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.126227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.150577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.202817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.239684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.283948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.327453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.378828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.426777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.465847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.479142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.516747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.548251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.589328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.620583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.679492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.708710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.776378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:01.847950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38436","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:07 up  5:19,  0 user,  load average: 4.79, 3.75, 3.06
	Linux embed-certs-531596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7686787474fca52c2819c5885171525469c5859b47781117bc24b263d240bda7] <==
	I1115 10:36:05.016170       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:36:05.016504       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:36:05.016626       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:36:05.016646       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:36:05.016668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:36:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:36:05.224101       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:36:05.224133       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:36:05.224142       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:36:05.227835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:36:35.224453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:36:35.224578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:36:35.228648       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:36:35.228647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:36:36.924567       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:36:36.924667       1 metrics.go:72] Registering metrics
	I1115 10:36:36.924776       1 controller.go:711] "Syncing nftables rules"
	I1115 10:36:45.226129       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:45.226591       1 main.go:301] handling current node
	I1115 10:36:55.229662       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:36:55.229698       1 main.go:301] handling current node
	I1115 10:37:05.232257       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:37:05.232288       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6574dcaec8359c575d60f7f9b4b4a31ffe5a8ffe0a63577c96e18b02396872f9] <==
	I1115 10:36:03.407061       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:36:03.407083       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:36:03.408972       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:36:03.414167       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:03.414277       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:36:03.414457       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:36:03.433683       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:03.456523       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:36:03.466520       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:36:03.466585       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:36:03.466662       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:36:03.466671       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:36:03.466678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:36:03.466683       1 cache.go:39] Caches are synced for autoregister controller
	E1115 10:36:03.481338       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:36:03.838042       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:04.230485       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:36:04.320395       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:04.367752       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:04.399315       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:04.573446       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.142.33"}
	I1115 10:36:04.592276       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.250.191"}
	I1115 10:36:06.902060       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:06.951954       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:07.003184       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fb296649ab4b3d918ce7358336368732c13315596a50250e0c726940c17152bc] <==
	I1115 10:36:06.497443       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:36:06.504202       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:36:06.506092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:36:06.507333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:06.507355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:36:06.507363       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:36:06.507475       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:36:06.507643       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:36:06.511911       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:36:06.515936       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:36:06.526609       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:36:06.531915       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:36:06.534383       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:36:06.538973       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:36:06.540129       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:36:06.542428       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:36:06.543474       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:36:06.546433       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:36:06.546484       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:36:06.546522       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:36:06.546572       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:36:06.546643       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:36:06.546691       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:36:06.555846       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:36:06.568106       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [befdacbffc79f18ce39527c245ffbfb64f06c3603bc06289cea4dadfac5cbe3c] <==
	I1115 10:36:05.435332       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:36:05.659928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:36:05.770209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:36:05.770326       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:36:05.770441       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:36:05.830738       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:36:05.830804       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:36:05.841131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:36:05.841536       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:36:05.841864       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:05.845946       1 config.go:200] "Starting service config controller"
	I1115 10:36:05.845999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:36:05.846041       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:36:05.846068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:36:05.846129       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:36:05.846158       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:36:05.846812       1 config.go:309] "Starting node config controller"
	I1115 10:36:05.849257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:36:05.849340       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:36:05.946688       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:36:05.946731       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:36:05.946704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f50de0346fbea41f639b33ec5f1eff63239807868eacf5aad15a6baeb58568df] <==
	I1115 10:36:03.359564       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:36:05.794498       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:36:05.794533       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:36:05.801872       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:36:05.802578       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:36:05.806020       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:36:05.802707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:36:05.802968       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:05.806301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:36:05.803077       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:05.806343       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:05.906495       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:36:05.906519       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:36:05.906541       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:07 embed-certs-531596 kubelet[770]: I1115 10:36:07.199290     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1ba49be7-6120-447b-a77f-a5167b5c87ad-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-tdd78\" (UID: \"1ba49be7-6120-447b-a77f-a5167b5c87ad\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78"
	Nov 15 10:36:07 embed-certs-531596 kubelet[770]: I1115 10:36:07.199314     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b33d94a0-d2c6-4220-b732-0427f005a96c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-57w6h\" (UID: \"b33d94a0-d2c6-4220-b732-0427f005a96c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-57w6h"
	Nov 15 10:36:07 embed-certs-531596 kubelet[770]: I1115 10:36:07.199334     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlpk\" (UniqueName: \"kubernetes.io/projected/b33d94a0-d2c6-4220-b732-0427f005a96c-kube-api-access-2rlpk\") pod \"kubernetes-dashboard-855c9754f9-57w6h\" (UID: \"b33d94a0-d2c6-4220-b732-0427f005a96c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-57w6h"
	Nov 15 10:36:08 embed-certs-531596 kubelet[770]: W1115 10:36:08.417799     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6743ffb16c2ea97a4d71c91376b18a707a8bab940ad3c518bc59c07af010f28c/crio-aad02c1ad8739cfd937bc41317299156a30b56b966500556949a0dd770c1c08f WatchSource:0}: Error finding container aad02c1ad8739cfd937bc41317299156a30b56b966500556949a0dd770c1c08f: Status 404 returned error can't find the container with id aad02c1ad8739cfd937bc41317299156a30b56b966500556949a0dd770c1c08f
	Nov 15 10:36:13 embed-certs-531596 kubelet[770]: I1115 10:36:13.447088     770 scope.go:117] "RemoveContainer" containerID="2ddbe08ebaee826748cbcd8b10579faa231491c63663966470dc7a9996a0815b"
	Nov 15 10:36:14 embed-certs-531596 kubelet[770]: I1115 10:36:14.453696     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:14 embed-certs-531596 kubelet[770]: E1115 10:36:14.453871     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:14 embed-certs-531596 kubelet[770]: I1115 10:36:14.456329     770 scope.go:117] "RemoveContainer" containerID="2ddbe08ebaee826748cbcd8b10579faa231491c63663966470dc7a9996a0815b"
	Nov 15 10:36:15 embed-certs-531596 kubelet[770]: I1115 10:36:15.457499     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:15 embed-certs-531596 kubelet[770]: E1115 10:36:15.458285     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:18 embed-certs-531596 kubelet[770]: I1115 10:36:18.345179     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:18 embed-certs-531596 kubelet[770]: E1115 10:36:18.345353     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:20 embed-certs-531596 kubelet[770]: I1115 10:36:20.500332     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-57w6h" podStartSLOduration=2.024294541 podStartE2EDuration="13.500304754s" podCreationTimestamp="2025-11-15 10:36:07 +0000 UTC" firstStartedPulling="2025-11-15 10:36:08.422349722 +0000 UTC m=+11.324115074" lastFinishedPulling="2025-11-15 10:36:19.898359935 +0000 UTC m=+22.800125287" observedRunningTime="2025-11-15 10:36:20.498575407 +0000 UTC m=+23.400340776" watchObservedRunningTime="2025-11-15 10:36:20.500304754 +0000 UTC m=+23.402070106"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: I1115 10:36:33.310516     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: I1115 10:36:33.514750     770 scope.go:117] "RemoveContainer" containerID="a04fb427b267b67551f6bc9fb9ffbe0856618027ecf4fa06a76a2d10d1623012"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: I1115 10:36:33.516476     770 scope.go:117] "RemoveContainer" containerID="edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	Nov 15 10:36:33 embed-certs-531596 kubelet[770]: E1115 10:36:33.517893     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:35 embed-certs-531596 kubelet[770]: I1115 10:36:35.522971     770 scope.go:117] "RemoveContainer" containerID="8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af"
	Nov 15 10:36:38 embed-certs-531596 kubelet[770]: I1115 10:36:38.345389     770 scope.go:117] "RemoveContainer" containerID="edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	Nov 15 10:36:38 embed-certs-531596 kubelet[770]: E1115 10:36:38.345578     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:36:52 embed-certs-531596 kubelet[770]: I1115 10:36:52.309815     770 scope.go:117] "RemoveContainer" containerID="edffc037317d2387077ef0776d6226248d3d851d0340387c56497d383eb1b924"
	Nov 15 10:36:52 embed-certs-531596 kubelet[770]: E1115 10:36:52.310468     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tdd78_kubernetes-dashboard(1ba49be7-6120-447b-a77f-a5167b5c87ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tdd78" podUID="1ba49be7-6120-447b-a77f-a5167b5c87ad"
	Nov 15 10:37:00 embed-certs-531596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:37:01 embed-certs-531596 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:37:01 embed-certs-531596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [46685d2b8f35198cf7577a7610a7caf8d7d1fab9df4015f3cd9180ef344ac005] <==
	2025/11/15 10:36:19 Starting overwatch
	2025/11/15 10:36:19 Using namespace: kubernetes-dashboard
	2025/11/15 10:36:19 Using in-cluster config to connect to apiserver
	2025/11/15 10:36:19 Using secret token for csrf signing
	2025/11/15 10:36:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:36:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:36:20 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:36:20 Generating JWE encryption key
	2025/11/15 10:36:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:36:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:36:20 Initializing JWE encryption key from synchronized object
	2025/11/15 10:36:20 Creating in-cluster Sidecar client
	2025/11/15 10:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:36:20 Serving insecurely on HTTP port: 9090
	2025/11/15 10:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8d6ee23472f5075985606028a63174d0467fcc73a763b67418774407ccb028af] <==
	I1115 10:36:04.895274       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:36:34.897676       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [de1e8a6e238b030899b09f439b383f95060907fc3a63f53a38f57dbc855ad558] <==
	W1115 10:36:43.325297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:46.923770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:49.977696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:52.999803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:53.009521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:53.009709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:36:53.009914       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-531596_c3000c07-475f-4656-ba6e-526de0c452fd!
	I1115 10:36:53.010863       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"50582e2f-871b-4ae3-bc92-dc6483b1130c", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-531596_c3000c07-475f-4656-ba6e-526de0c452fd became leader
	W1115 10:36:53.024625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:53.040829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:36:53.110298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-531596_c3000c07-475f-4656-ba6e-526de0c452fd!
	W1115 10:36:55.043590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:55.048444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:57.051717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:57.056584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:59.059837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:36:59.067253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:01.070509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:01.075893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:03.083092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:03.088472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:05.096062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:05.110457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:07.113907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:07.118390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-531596 -n embed-certs-531596
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-531596 -n embed-certs-531596: exit status 2 (373.010717ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-531596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.49s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (355.932472ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-395885
helpers_test.go:243: (dbg) docker inspect newest-cni-395885:

-- stdout --
	[
	    {
	        "Id": "4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618",
	        "Created": "2025-11-15T10:37:16.384426052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 719703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:37:16.445877539Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/hosts",
	        "LogPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618-json.log",
	        "Name": "/newest-cni-395885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-395885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-395885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618",
	                "LowerDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-395885",
	                "Source": "/var/lib/docker/volumes/newest-cni-395885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-395885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-395885",
	                "name.minikube.sigs.k8s.io": "newest-cni-395885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed121655b0c98b91aacb4025440c107657e70acdf4d54fc77dfc49ffef2bcebc",
	            "SandboxKey": "/var/run/docker/netns/ed121655b0c9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-395885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:7d:7b:9f:ae:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c000d9cd848aa0e1eda0146b58174b6c18a724587543714ebd99f791f9b9348d",
	                    "EndpointID": "6975df73cd6806bf7bd8c29ece8dedfb17049c19d5473dbc6d6bfb4cce2c9f62",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-395885",
	                        "4aa47ed5c3a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
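Illustration only (not part of the captured test output): the inspect JSON above shows the kic container publishing its service ports (22, 2376, 5000, 8443 and 32443/tcp) on ephemeral localhost ports. The host port mapped to any of them can be read back with the same Go-template form of docker container inspect that the provisioning log further down uses for 22/tcp; for this run it resolves to 33814.

	# Illustration: print the localhost port Docker mapped to the container's SSH port (22/tcp)
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  newest-cni-395885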
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-395885 logs -n 25
E1115 10:37:50.068610  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-395885 logs -n 25: (1.115316425s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-845026       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ delete  │ -p old-k8s-version-448285                                                                                                                                                                                                                     │ old-k8s-version-448285       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:34 UTC │
	│ delete  │ -p cert-expiration-845026                                                                                                                                                                                                                     │ cert-expiration-845026       │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:33 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:37:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:37:10.903604  719318 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:10.903858  719318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:10.903883  719318 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:10.903903  719318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:10.904216  719318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:37:10.904685  719318 out.go:368] Setting JSON to false
	I1115 10:37:10.905732  719318 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19182,"bootTime":1763183849,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:37:10.905830  719318 start.go:143] virtualization:  
	I1115 10:37:10.910067  719318 out.go:179] * [newest-cni-395885] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:37:10.914629  719318 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:37:10.914823  719318 notify.go:221] Checking for updates...
	I1115 10:37:10.921205  719318 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:37:10.924545  719318 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:37:10.927729  719318 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:37:10.930817  719318 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:37:10.933834  719318 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:37:10.937500  719318 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:10.937646  719318 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:37:10.970139  719318 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:37:10.970268  719318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:11.043945  719318 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:37:11.033568389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:11.044054  719318 docker.go:319] overlay module found
	I1115 10:37:11.047355  719318 out.go:179] * Using the docker driver based on user configuration
	I1115 10:37:11.050329  719318 start.go:309] selected driver: docker
	I1115 10:37:11.050349  719318 start.go:930] validating driver "docker" against <nil>
	I1115 10:37:11.050364  719318 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:37:11.051151  719318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:11.110890  719318 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:37:11.101230984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:11.111061  719318 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1115 10:37:11.111098  719318 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1115 10:37:11.111349  719318 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:37:11.114323  719318 out.go:179] * Using Docker driver with root privileges
	I1115 10:37:11.117264  719318 cni.go:84] Creating CNI manager for ""
	I1115 10:37:11.117383  719318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:37:11.117432  719318 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:37:11.117556  719318 start.go:353] cluster config:
	{Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:37:11.121743  719318 out.go:179] * Starting "newest-cni-395885" primary control-plane node in "newest-cni-395885" cluster
	I1115 10:37:11.124794  719318 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:37:11.127808  719318 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:37:11.130783  719318 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:37:11.130851  719318 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:37:11.130863  719318 cache.go:65] Caching tarball of preloaded images
	I1115 10:37:11.130902  719318 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:37:11.130963  719318 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:37:11.130984  719318 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:37:11.131098  719318 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json ...
	I1115 10:37:11.131118  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json: {Name:mk1792a0212b63f24470efba5957b5d20e31c1b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:11.160330  719318 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:37:11.160351  719318 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:37:11.160372  719318 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:37:11.160397  719318 start.go:360] acquireMachinesLock for newest-cni-395885: {Name:mka4032c99bad1affc6ad41e6339261f7082d729 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:37:11.160524  719318 start.go:364] duration metric: took 110.348µs to acquireMachinesLock for "newest-cni-395885"
	I1115 10:37:11.160551  719318 start.go:93] Provisioning new machine with config: &{Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:37:11.160631  719318 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:37:10.288206  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:12.288518  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:11.164118  719318 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:37:11.164363  719318 start.go:159] libmachine.API.Create for "newest-cni-395885" (driver="docker")
	I1115 10:37:11.164408  719318 client.go:173] LocalClient.Create starting
	I1115 10:37:11.164484  719318 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:37:11.164526  719318 main.go:143] libmachine: Decoding PEM data...
	I1115 10:37:11.164543  719318 main.go:143] libmachine: Parsing certificate...
	I1115 10:37:11.164604  719318 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:37:11.164634  719318 main.go:143] libmachine: Decoding PEM data...
	I1115 10:37:11.164648  719318 main.go:143] libmachine: Parsing certificate...
	I1115 10:37:11.165019  719318 cli_runner.go:164] Run: docker network inspect newest-cni-395885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:37:11.181542  719318 cli_runner.go:211] docker network inspect newest-cni-395885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:37:11.181661  719318 network_create.go:284] running [docker network inspect newest-cni-395885] to gather additional debugging logs...
	I1115 10:37:11.181680  719318 cli_runner.go:164] Run: docker network inspect newest-cni-395885
	W1115 10:37:11.198734  719318 cli_runner.go:211] docker network inspect newest-cni-395885 returned with exit code 1
	I1115 10:37:11.198764  719318 network_create.go:287] error running [docker network inspect newest-cni-395885]: docker network inspect newest-cni-395885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-395885 not found
	I1115 10:37:11.198780  719318 network_create.go:289] output of [docker network inspect newest-cni-395885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-395885 not found
	
	** /stderr **
	I1115 10:37:11.198899  719318 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:37:11.222123  719318 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:37:11.222473  719318 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:37:11.222787  719318 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:37:11.223218  719318 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d86c0}
	I1115 10:37:11.223242  719318 network_create.go:124] attempt to create docker network newest-cni-395885 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:37:11.223308  719318 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-395885 newest-cni-395885
	I1115 10:37:11.291407  719318 network_create.go:108] docker network newest-cni-395885 192.168.76.0/24 created
	I1115 10:37:11.291441  719318 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-395885" container
	I1115 10:37:11.291526  719318 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:37:11.308681  719318 cli_runner.go:164] Run: docker volume create newest-cni-395885 --label name.minikube.sigs.k8s.io=newest-cni-395885 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:37:11.326313  719318 oci.go:103] Successfully created a docker volume newest-cni-395885
	I1115 10:37:11.326411  719318 cli_runner.go:164] Run: docker run --rm --name newest-cni-395885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-395885 --entrypoint /usr/bin/test -v newest-cni-395885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:37:11.897487  719318 oci.go:107] Successfully prepared a docker volume newest-cni-395885
	I1115 10:37:11.897553  719318 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:37:11.897569  719318 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:37:11.897678  719318 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-395885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 10:37:14.291331  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:16.789001  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:16.315752  719318 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-395885:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41803187s)
	I1115 10:37:16.315795  719318 kic.go:203] duration metric: took 4.418223225s to extract preloaded images to volume ...
	W1115 10:37:16.315932  719318 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:37:16.316042  719318 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:37:16.369298  719318 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-395885 --name newest-cni-395885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-395885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-395885 --network newest-cni-395885 --ip 192.168.76.2 --volume newest-cni-395885:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:37:16.666328  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Running}}
	I1115 10:37:16.693751  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:16.716177  719318 cli_runner.go:164] Run: docker exec newest-cni-395885 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:37:16.769723  719318 oci.go:144] the created container "newest-cni-395885" has a running status.
	I1115 10:37:16.769750  719318 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa...
	I1115 10:37:17.342542  719318 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:37:17.373939  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:17.393827  719318 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:37:17.393851  719318 kic_runner.go:114] Args: [docker exec --privileged newest-cni-395885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:37:17.435146  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:17.452043  719318 machine.go:94] provisionDockerMachine start ...
	I1115 10:37:17.452135  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:17.470431  719318 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:17.470893  719318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 10:37:17.470906  719318 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:37:17.471680  719318 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:37:20.629016  719318 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-395885
	
	I1115 10:37:20.629083  719318 ubuntu.go:182] provisioning hostname "newest-cni-395885"
	I1115 10:37:20.629165  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:20.646994  719318 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:20.647320  719318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 10:37:20.647338  719318 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-395885 && echo "newest-cni-395885" | sudo tee /etc/hostname
	I1115 10:37:20.807545  719318 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-395885
	
	I1115 10:37:20.807620  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:20.828431  719318 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:20.828756  719318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 10:37:20.828779  719318 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-395885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-395885/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-395885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:37:20.981936  719318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:37:20.981964  719318 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:37:20.981986  719318 ubuntu.go:190] setting up certificates
	I1115 10:37:20.981995  719318 provision.go:84] configureAuth start
	I1115 10:37:20.982062  719318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:20.999364  719318 provision.go:143] copyHostCerts
	I1115 10:37:20.999437  719318 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:37:20.999455  719318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:37:20.999537  719318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:37:20.999642  719318 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:37:20.999652  719318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:37:20.999682  719318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:37:20.999750  719318 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:37:20.999759  719318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:37:20.999790  719318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:37:20.999852  719318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.newest-cni-395885 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-395885]
	I1115 10:37:21.287343  719318 provision.go:177] copyRemoteCerts
	I1115 10:37:21.287408  719318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:37:21.287452  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:21.305281  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:21.413417  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:37:21.434160  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:37:21.451337  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:37:21.471030  719318 provision.go:87] duration metric: took 489.002208ms to configureAuth
	I1115 10:37:21.471058  719318 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:37:21.471264  719318 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:21.471377  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:21.488887  719318 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:21.489240  719318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33814 <nil> <nil>}
	I1115 10:37:21.489261  719318 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:37:21.760696  719318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:37:21.760770  719318 machine.go:97] duration metric: took 4.308692014s to provisionDockerMachine
	I1115 10:37:21.760795  719318 client.go:176] duration metric: took 10.596374691s to LocalClient.Create
	I1115 10:37:21.760847  719318 start.go:167] duration metric: took 10.596485924s to libmachine.API.Create "newest-cni-395885"
	I1115 10:37:21.760873  719318 start.go:293] postStartSetup for "newest-cni-395885" (driver="docker")
	I1115 10:37:21.760902  719318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:37:21.761026  719318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:37:21.761104  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:21.778198  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:21.885656  719318 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:37:21.889002  719318 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:37:21.889030  719318 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:37:21.889041  719318 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:37:21.889091  719318 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:37:21.889181  719318 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:37:21.889290  719318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:37:21.896686  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:37:21.914279  719318 start.go:296] duration metric: took 153.372342ms for postStartSetup
	I1115 10:37:21.914662  719318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:21.931031  719318 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json ...
	I1115 10:37:21.931325  719318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:37:21.931377  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:21.947619  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:22.055343  719318 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:37:22.060623  719318 start.go:128] duration metric: took 10.899974444s to createHost
	I1115 10:37:22.060646  719318 start.go:83] releasing machines lock for "newest-cni-395885", held for 10.900111769s
	I1115 10:37:22.060726  719318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:22.078685  719318 ssh_runner.go:195] Run: cat /version.json
	I1115 10:37:22.078744  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:22.078772  719318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:37:22.078826  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:22.100064  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:22.101340  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:22.294109  719318 ssh_runner.go:195] Run: systemctl --version
	I1115 10:37:22.300516  719318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:37:22.336487  719318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:37:22.341063  719318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:37:22.341142  719318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:37:22.372787  719318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:37:22.372814  719318 start.go:496] detecting cgroup driver to use...
	I1115 10:37:22.372849  719318 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:37:22.372909  719318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:37:22.392075  719318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:37:22.405075  719318 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:37:22.405166  719318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:37:22.424422  719318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:37:22.443606  719318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:37:22.577257  719318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:37:22.695038  719318 docker.go:234] disabling docker service ...
	I1115 10:37:22.695108  719318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:37:22.717303  719318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:37:22.731541  719318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:37:22.848975  719318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:37:22.970284  719318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:37:22.983573  719318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:37:22.997946  719318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:37:22.998033  719318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.008929  719318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:37:23.009002  719318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.017919  719318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.027275  719318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.035989  719318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:37:23.044369  719318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.053656  719318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.068291  719318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:37:23.077399  719318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:37:23.084777  719318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:37:23.092138  719318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:37:23.224174  719318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:37:23.363528  719318 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:37:23.363678  719318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:37:23.368052  719318 start.go:564] Will wait 60s for crictl version
	I1115 10:37:23.368163  719318 ssh_runner.go:195] Run: which crictl
	I1115 10:37:23.371681  719318 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:37:23.398728  719318 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
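crictl resolves CRI-O here because of the /etc/crictl.yaml written a few lines above; reconstructed from that printf, the file contains a single line:

	runtime-endpoint: unix:///var/run/crio/crio.sock

Every later crictl call in this run (`crictl images`, `crictl ps`) goes through this endpoint.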
	I1115 10:37:23.398822  719318 ssh_runner.go:195] Run: crio --version
	I1115 10:37:23.428775  719318 ssh_runner.go:195] Run: crio --version
	I1115 10:37:23.464482  719318 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:37:23.467400  719318 cli_runner.go:164] Run: docker network inspect newest-cni-395885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:37:23.483687  719318 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:37:23.487609  719318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:37:23.500093  719318 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1115 10:37:19.288373  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:21.289281  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:23.789020  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:23.502901  719318 kubeadm.go:884] updating cluster {Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:37:23.503057  719318 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:37:23.503142  719318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:37:23.534696  719318 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:37:23.534720  719318 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:37:23.534813  719318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:37:23.559861  719318 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:37:23.559882  719318 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:37:23.559890  719318 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:37:23.560027  719318 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-395885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
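The doubled ExecStart in the unit fragment above is the standard systemd drop-in idiom: an empty `ExecStart=` first clears the command inherited from the base kubelet unit, and the second line sets the minikube-specific command, so the override replaces rather than appends. The fragment is written to the node a few lines below as the 367-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal drop-in using the same idiom (paths illustrative):

	[Service]
	ExecStart=
	ExecStart=/new/command --with-flags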
	I1115 10:37:23.560127  719318 ssh_runner.go:195] Run: crio config
	I1115 10:37:23.617740  719318 cni.go:84] Creating CNI manager for ""
	I1115 10:37:23.617769  719318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:37:23.617786  719318 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:37:23.617830  719318 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-395885 NodeName:newest-cni-395885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:37:23.617992  719318 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-395885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
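	The multi-document YAML above is what kubeadm is fed at init (it is copied to the node as /var/tmp/minikube/kubeadm.yaml a few lines below). A minimal Go sketch, assuming the standard sigs.k8s.io/yaml package, that pulls the KubeletConfiguration document back out of that file and checks its cgroupDriver against the driver minikube detected on the host ("cgroupfs" in this run):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func main() {
		// Path taken from the log: the generated kubeadm config dumped above.
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		// The dump above separates its documents with bare "---" lines.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				log.Fatal(err)
			}
			if m["kind"] == "KubeletConfiguration" {
				// Expect "cgroupfs", matching the cgroup driver detected earlier in the run.
				fmt.Println("cgroupDriver:", m["cgroupDriver"])
			}
		}
	}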
	
	I1115 10:37:23.618083  719318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:37:23.626100  719318 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:37:23.626169  719318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:37:23.633623  719318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:37:23.646365  719318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:37:23.658813  719318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1115 10:37:23.672526  719318 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:37:23.676302  719318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
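	With the host.minikube.internal edit earlier in this run and the control-plane.minikube.internal edit here, the node's /etc/hosts ends up carrying entries equivalent to:

	192.168.76.1	host.minikube.internal
	192.168.76.2	control-plane.minikube.internal

	The grep-then-rewrite idiom in both commands strips any stale entry for the name before appending the fresh one, so repeated starts do not accumulate duplicates.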
	I1115 10:37:23.691576  719318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:37:23.811410  719318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:37:23.827497  719318 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885 for IP: 192.168.76.2
	I1115 10:37:23.827516  719318 certs.go:195] generating shared ca certs ...
	I1115 10:37:23.827533  719318 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:23.827664  719318 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:37:23.827707  719318 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:37:23.827721  719318 certs.go:257] generating profile certs ...
	I1115 10:37:23.827774  719318 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.key
	I1115 10:37:23.827784  719318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.crt with IP's: []
	I1115 10:37:24.223703  719318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.crt ...
	I1115 10:37:24.223735  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.crt: {Name:mk3c4aef471ed508d465c9837f4d3d21b24eb324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:24.223925  719318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.key ...
	I1115 10:37:24.223940  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.key: {Name:mk3945b6b1e1dfc86ae23f636c3276c796d3582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:24.224022  719318 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key.128d9837
	I1115 10:37:24.224043  719318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt.128d9837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:37:24.552066  719318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt.128d9837 ...
	I1115 10:37:24.552097  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt.128d9837: {Name:mk346fc0b2eb7398cc8cad5ae32f46e8dde8f118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:24.552288  719318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key.128d9837 ...
	I1115 10:37:24.552303  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key.128d9837: {Name:mk2a9363c2ca0b6986de7e47d298c86d472827c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:24.552393  719318 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt.128d9837 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt
	I1115 10:37:24.552475  719318 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key.128d9837 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key
	I1115 10:37:24.552539  719318 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.key
	I1115 10:37:24.552559  719318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.crt with IP's: []
	I1115 10:37:25.368925  719318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.crt ...
	I1115 10:37:25.368955  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.crt: {Name:mk23c59c92f0f709b0dee1a4595281da52470e65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:25.369124  719318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.key ...
	I1115 10:37:25.369147  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.key: {Name:mk0a52dc8eaa1f77a367c2147caf39d1beb1b4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:25.369321  719318 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:37:25.369365  719318 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:37:25.369379  719318 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:37:25.369404  719318 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:37:25.369430  719318 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:37:25.369456  719318 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:37:25.369507  719318 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:37:25.370111  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:37:25.387985  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:37:25.407239  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:37:25.424486  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:37:25.442096  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:37:25.465197  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:37:25.482761  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:37:25.500889  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:37:25.519277  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:37:25.536228  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:37:25.553504  719318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:37:25.570983  719318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:37:25.584066  719318 ssh_runner.go:195] Run: openssl version
	I1115 10:37:25.590461  719318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:37:25.598908  719318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:37:25.602714  719318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:37:25.602774  719318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:37:25.643180  719318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:37:25.651463  719318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:37:25.660572  719318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:37:25.664069  719318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:37:25.664155  719318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:37:25.709924  719318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:37:25.718253  719318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:37:25.726643  719318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:37:25.730633  719318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:37:25.730736  719318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:37:25.771668  719318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:37:25.780018  719318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:37:25.783558  719318 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:37:25.783615  719318 kubeadm.go:401] StartCluster: {Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:37:25.783698  719318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:37:25.783756  719318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:37:25.816266  719318 cri.go:89] found id: ""
	I1115 10:37:25.816417  719318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:37:25.825044  719318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:37:25.832717  719318 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:37:25.832837  719318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:37:25.841244  719318 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:37:25.841314  719318 kubeadm.go:158] found existing configuration files:
	
	I1115 10:37:25.841374  719318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:37:25.849136  719318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:37:25.849200  719318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:37:25.856633  719318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:37:25.864434  719318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:37:25.864540  719318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:37:25.872033  719318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:37:25.879779  719318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:37:25.879890  719318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:37:25.887715  719318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:37:25.895251  719318 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:37:25.895317  719318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:37:25.902944  719318 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:37:25.945063  719318 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:37:25.945388  719318 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:37:25.974705  719318 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:37:25.974952  719318 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:37:25.974998  719318 kubeadm.go:319] OS: Linux
	I1115 10:37:25.975052  719318 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:37:25.975108  719318 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:37:25.975164  719318 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:37:25.975217  719318 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:37:25.975272  719318 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:37:25.975333  719318 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:37:25.975384  719318 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:37:25.975439  719318 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:37:25.975491  719318 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:37:26.057526  719318 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:37:26.057665  719318 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:37:26.057769  719318 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:37:26.066694  719318 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 10:37:26.289279  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:28.289644  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:26.069966  719318 out.go:252]   - Generating certificates and keys ...
	I1115 10:37:26.070086  719318 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:37:26.070175  719318 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:37:26.354563  719318 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:37:26.576221  719318 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:37:27.424375  719318 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:37:28.040703  719318 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:37:28.241035  719318 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:37:28.241427  719318 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-395885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:37:28.811827  719318 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:37:28.812193  719318 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-395885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:37:29.454871  719318 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:37:30.371887  719318 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:37:30.751231  719318 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:37:30.751506  719318 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:37:31.476664  719318 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:37:32.252282  719318 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:37:32.744830  719318 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:37:33.061569  719318 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:37:33.443912  719318 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:37:33.444710  719318 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:37:33.448458  719318 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:37:30.788411  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:33.288680  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:33.451996  719318 out.go:252]   - Booting up control plane ...
	I1115 10:37:33.452107  719318 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:37:33.452188  719318 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:37:33.452827  719318 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:37:33.468536  719318 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:37:33.468936  719318 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:37:33.477875  719318 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:37:33.478236  719318 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:37:33.478285  719318 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:37:33.635879  719318 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:37:33.636004  719318 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:37:34.637126  719318 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001628355s
	I1115 10:37:34.640378  719318 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:37:34.640476  719318 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 10:37:34.640779  719318 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:37:34.640869  719318 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 10:37:35.789000  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:37.789344  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:38.500612  719318 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.859697476s
	I1115 10:37:40.118600  719318 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.47812272s
	I1115 10:37:41.142545  719318 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501809031s
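	The control-plane-check phase above polls three local health endpoints until each answers. A rough stand-in for that probe, with the endpoint URLs taken from the log (TLS verification is skipped purely for illustration, since this sketch does not load the cluster CA; kubeadm's own checker is more careful):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		endpoints := []string{
			"https://192.168.76.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		}
		for _, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println(url, "not healthy yet:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(url, "->", resp.Status)
		}
	}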
	I1115 10:37:41.169334  719318 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:37:41.189158  719318 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:37:41.204508  719318 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:37:41.204735  719318 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-395885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:37:41.223531  719318 kubeadm.go:319] [bootstrap-token] Using token: cxuzbh.cvqvzypqa1u5bkf7
	I1115 10:37:41.226470  719318 out.go:252]   - Configuring RBAC rules ...
	I1115 10:37:41.226607  719318 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:37:41.231521  719318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:37:41.239953  719318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:37:41.246476  719318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:37:41.251090  719318 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:37:41.255172  719318 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:37:41.550599  719318 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:37:41.992431  719318 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:37:42.550165  719318 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:37:42.551412  719318 kubeadm.go:319] 
	I1115 10:37:42.551491  719318 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:37:42.551505  719318 kubeadm.go:319] 
	I1115 10:37:42.551587  719318 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:37:42.551597  719318 kubeadm.go:319] 
	I1115 10:37:42.551624  719318 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:37:42.551688  719318 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:37:42.551745  719318 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:37:42.551754  719318 kubeadm.go:319] 
	I1115 10:37:42.551811  719318 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:37:42.551819  719318 kubeadm.go:319] 
	I1115 10:37:42.551869  719318 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:37:42.551876  719318 kubeadm.go:319] 
	I1115 10:37:42.551931  719318 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:37:42.552012  719318 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:37:42.552088  719318 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:37:42.552096  719318 kubeadm.go:319] 
	I1115 10:37:42.552184  719318 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:37:42.552274  719318 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:37:42.552283  719318 kubeadm.go:319] 
	I1115 10:37:42.552370  719318 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cxuzbh.cvqvzypqa1u5bkf7 \
	I1115 10:37:42.552481  719318 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:37:42.552507  719318 kubeadm.go:319] 	--control-plane 
	I1115 10:37:42.552517  719318 kubeadm.go:319] 
	I1115 10:37:42.552606  719318 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:37:42.552621  719318 kubeadm.go:319] 
	I1115 10:37:42.552707  719318 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cxuzbh.cvqvzypqa1u5bkf7 \
	I1115 10:37:42.552822  719318 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:37:42.557858  719318 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:37:42.558111  719318 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:37:42.558224  719318 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:37:42.558244  719318 cni.go:84] Creating CNI manager for ""
	I1115 10:37:42.558251  719318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:37:42.562651  719318 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 10:37:40.288476  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	W1115 10:37:42.289029  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:42.565436  719318 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:37:42.569508  719318 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:37:42.569530  719318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:37:42.582960  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:37:42.904004  719318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:37:42.904087  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:42.904155  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-395885 minikube.k8s.io/updated_at=2025_11_15T10_37_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=newest-cni-395885 minikube.k8s.io/primary=true
	I1115 10:37:43.108003  719318 ops.go:34] apiserver oom_adj: -16
	I1115 10:37:43.108138  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:43.609160  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:44.108455  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:44.608337  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:45.109300  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:45.608400  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:46.108744  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:46.608660  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:47.108294  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:47.608842  719318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:37:47.705635  719318 kubeadm.go:1114] duration metric: took 4.801604115s to wait for elevateKubeSystemPrivileges
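	The repeated `kubectl get sa default` calls above are minikube waiting for the default ServiceAccount to exist before it relies on the minikube-rbac clusterrolebinding created earlier. The real loop shells out to kubectl as logged; an equivalent wait written against client-go might look like this (kubeconfig path and timing values are illustrative):

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll every 500ms, for up to a minute, until the "default" ServiceAccount exists.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				return err == nil, nil
			})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("default ServiceAccount is available")
	}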
	I1115 10:37:47.705665  719318 kubeadm.go:403] duration metric: took 21.922055448s to StartCluster
	I1115 10:37:47.705684  719318 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:47.705745  719318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:37:47.706856  719318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:37:47.707142  719318 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:37:47.707267  719318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:37:47.707562  719318 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:47.707608  719318 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:37:47.707669  719318 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-395885"
	I1115 10:37:47.707683  719318 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-395885"
	I1115 10:37:47.707705  719318 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:37:47.708398  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:47.708810  719318 addons.go:70] Setting default-storageclass=true in profile "newest-cni-395885"
	I1115 10:37:47.708834  719318 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-395885"
	I1115 10:37:47.709136  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:47.711721  719318 out.go:179] * Verifying Kubernetes components...
	I1115 10:37:47.719395  719318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:37:47.755103  719318 addons.go:239] Setting addon default-storageclass=true in "newest-cni-395885"
	I1115 10:37:47.755142  719318 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:37:47.755549  719318 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:47.760520  719318 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1115 10:37:44.789775  715373 node_ready.go:57] node "default-k8s-diff-port-303164" has "Ready":"False" status (will retry)
	I1115 10:37:46.290243  715373 node_ready.go:49] node "default-k8s-diff-port-303164" is "Ready"
	I1115 10:37:46.290269  715373 node_ready.go:38] duration metric: took 40.504979689s for node "default-k8s-diff-port-303164" to be "Ready" ...
	I1115 10:37:46.290282  715373 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:37:46.290343  715373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:37:46.311563  715373 api_server.go:72] duration metric: took 42.539387395s to wait for apiserver process to appear ...
	I1115 10:37:46.311586  715373 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:37:46.311605  715373 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:37:46.324403  715373 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:37:46.326173  715373 api_server.go:141] control plane version: v1.34.1
	I1115 10:37:46.326193  715373 api_server.go:131] duration metric: took 14.600339ms to wait for apiserver health ...
	I1115 10:37:46.326203  715373 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:37:46.346994  715373 system_pods.go:59] 8 kube-system pods found
	I1115 10:37:46.347027  715373 system_pods.go:61] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:37:46.347036  715373 system_pods.go:61] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running
	I1115 10:37:46.347042  715373 system_pods.go:61] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:37:46.347047  715373 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running
	I1115 10:37:46.347051  715373 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running
	I1115 10:37:46.347058  715373 system_pods.go:61] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:37:46.347063  715373 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running
	I1115 10:37:46.347069  715373 system_pods.go:61] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:37:46.347075  715373 system_pods.go:74] duration metric: took 20.866543ms to wait for pod list to return data ...
	I1115 10:37:46.347083  715373 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:37:46.355248  715373 default_sa.go:45] found service account: "default"
	I1115 10:37:46.355269  715373 default_sa.go:55] duration metric: took 8.18122ms for default service account to be created ...
	I1115 10:37:46.355279  715373 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:37:46.436211  715373 system_pods.go:86] 8 kube-system pods found
	I1115 10:37:46.436306  715373 system_pods.go:89] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:37:46.436328  715373 system_pods.go:89] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running
	I1115 10:37:46.436371  715373 system_pods.go:89] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:37:46.436398  715373 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running
	I1115 10:37:46.436426  715373 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running
	I1115 10:37:46.436467  715373 system_pods.go:89] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:37:46.436494  715373 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running
	I1115 10:37:46.436518  715373 system_pods.go:89] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:37:46.436569  715373 retry.go:31] will retry after 270.242248ms: missing components: kube-dns
	I1115 10:37:46.711228  715373 system_pods.go:86] 8 kube-system pods found
	I1115 10:37:46.711266  715373 system_pods.go:89] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:37:46.711273  715373 system_pods.go:89] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running
	I1115 10:37:46.711280  715373 system_pods.go:89] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:37:46.711284  715373 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running
	I1115 10:37:46.711289  715373 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running
	I1115 10:37:46.711292  715373 system_pods.go:89] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:37:46.711298  715373 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running
	I1115 10:37:46.711304  715373 system_pods.go:89] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:37:46.711318  715373 retry.go:31] will retry after 347.71497ms: missing components: kube-dns
	I1115 10:37:47.062760  715373 system_pods.go:86] 8 kube-system pods found
	I1115 10:37:47.062795  715373 system_pods.go:89] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Running
	I1115 10:37:47.062804  715373 system_pods.go:89] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running
	I1115 10:37:47.062810  715373 system_pods.go:89] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:37:47.062814  715373 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running
	I1115 10:37:47.062819  715373 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running
	I1115 10:37:47.062823  715373 system_pods.go:89] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:37:47.062827  715373 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running
	I1115 10:37:47.062831  715373 system_pods.go:89] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Running
	I1115 10:37:47.062839  715373 system_pods.go:126] duration metric: took 707.553706ms to wait for k8s-apps to be running ...
	I1115 10:37:47.062851  715373 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:37:47.062908  715373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:37:47.077755  715373 system_svc.go:56] duration metric: took 14.894025ms WaitForService to wait for kubelet
	I1115 10:37:47.077783  715373 kubeadm.go:587] duration metric: took 43.305612701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:37:47.077802  715373 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:37:47.081124  715373 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:37:47.081156  715373 node_conditions.go:123] node cpu capacity is 2
	I1115 10:37:47.081183  715373 node_conditions.go:105] duration metric: took 3.376119ms to run NodePressure ...
	I1115 10:37:47.081197  715373 start.go:242] waiting for startup goroutines ...
	I1115 10:37:47.081208  715373 start.go:247] waiting for cluster config update ...
	I1115 10:37:47.081222  715373 start.go:256] writing updated cluster config ...
	I1115 10:37:47.081507  715373 ssh_runner.go:195] Run: rm -f paused
	I1115 10:37:47.085279  715373 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:37:47.089970  715373 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-97gv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.097958  715373 pod_ready.go:94] pod "coredns-66bc5c9577-97gv6" is "Ready"
	I1115 10:37:47.098028  715373 pod_ready.go:86] duration metric: took 8.033843ms for pod "coredns-66bc5c9577-97gv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.100687  715373 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.105748  715373 pod_ready.go:94] pod "etcd-default-k8s-diff-port-303164" is "Ready"
	I1115 10:37:47.105772  715373 pod_ready.go:86] duration metric: took 5.029184ms for pod "etcd-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.108098  715373 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.118040  715373 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-303164" is "Ready"
	I1115 10:37:47.118118  715373 pod_ready.go:86] duration metric: took 9.964079ms for pod "kube-apiserver-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.120837  715373 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.488646  715373 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-303164" is "Ready"
	I1115 10:37:47.488723  715373 pod_ready.go:86] duration metric: took 367.816424ms for pod "kube-controller-manager-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:47.690758  715373 pod_ready.go:83] waiting for pod "kube-proxy-vmnnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:48.090668  715373 pod_ready.go:94] pod "kube-proxy-vmnnc" is "Ready"
	I1115 10:37:48.090699  715373 pod_ready.go:86] duration metric: took 399.868673ms for pod "kube-proxy-vmnnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:48.291457  715373 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:48.690001  715373 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-303164" is "Ready"
	I1115 10:37:48.690029  715373 pod_ready.go:86] duration metric: took 398.542121ms for pod "kube-scheduler-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:37:48.690042  715373 pod_ready.go:40] duration metric: took 1.604732047s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:37:48.806669  715373 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:37:48.809777  715373 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-303164" cluster and "default" namespace by default
	I1115 10:37:47.763315  719318 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:37:47.763337  719318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:37:47.763400  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:47.791640  719318 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:37:47.791668  719318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:37:47.791730  719318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:47.810617  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:47.833838  719318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33814 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:48.017525  719318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:37:48.017660  719318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:37:48.031665  719318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:37:48.129408  719318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:37:48.441923  719318 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:37:48.441991  719318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:37:48.442079  719318 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 10:37:48.822914  719318 api_server.go:72] duration metric: took 1.115727509s to wait for apiserver process to appear ...
	I1115 10:37:48.822935  719318 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:37:48.822951  719318 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:37:48.897932  719318 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:37:48.900572  719318 api_server.go:141] control plane version: v1.34.1
	I1115 10:37:48.900607  719318 api_server.go:131] duration metric: took 77.665236ms to wait for apiserver health ...
	I1115 10:37:48.900619  719318 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:37:48.903949  719318 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:37:48.904200  719318 system_pods.go:59] 8 kube-system pods found
	I1115 10:37:48.904232  719318 system_pods.go:61] "coredns-66bc5c9577-mg7hm" [a7b030f3-a8f3-4baf-ba33-cfa56768bc15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:37:48.904248  719318 system_pods.go:61] "etcd-newest-cni-395885" [9ebe1396-d892-4a22-a83c-01ae69b07011] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:37:48.904267  719318 system_pods.go:61] "kindnet-bqt7r" [d9452c06-f40d-4e91-be67-17e243f8edd9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:37:48.904277  719318 system_pods.go:61] "kube-apiserver-newest-cni-395885" [762efbd5-c6b4-4c20-9e93-8f9a68fe2b8c] Running
	I1115 10:37:48.904285  719318 system_pods.go:61] "kube-controller-manager-newest-cni-395885" [ed67cb82-b823-42a3-8afa-f2e050c12292] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:37:48.904298  719318 system_pods.go:61] "kube-proxy-t26c4" [17d73502-7107-4d36-8af0-187ea6985a47] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:37:48.904303  719318 system_pods.go:61] "kube-scheduler-newest-cni-395885" [216e9e79-e3fe-485d-aaac-03d61448730a] Running
	I1115 10:37:48.904309  719318 system_pods.go:61] "storage-provisioner" [fee63e42-26d6-4e9d-b080-c433640e6144] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:37:48.904322  719318 system_pods.go:74] duration metric: took 3.698227ms to wait for pod list to return data ...
	I1115 10:37:48.904332  719318 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:37:48.907154  719318 default_sa.go:45] found service account: "default"
	I1115 10:37:48.907182  719318 default_sa.go:55] duration metric: took 2.839158ms for default service account to be created ...
	I1115 10:37:48.907194  719318 kubeadm.go:587] duration metric: took 1.200010972s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:37:48.907218  719318 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:37:48.907737  719318 addons.go:515] duration metric: took 1.200126283s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:37:48.914275  719318 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:37:48.914305  719318 node_conditions.go:123] node cpu capacity is 2
	I1115 10:37:48.914319  719318 node_conditions.go:105] duration metric: took 7.095473ms to run NodePressure ...
	I1115 10:37:48.914332  719318 start.go:242] waiting for startup goroutines ...
	I1115 10:37:48.953331  719318 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-395885" context rescaled to 1 replicas
	I1115 10:37:48.953368  719318 start.go:247] waiting for cluster config update ...
	I1115 10:37:48.953381  719318 start.go:256] writing updated cluster config ...
	I1115 10:37:48.953747  719318 ssh_runner.go:195] Run: rm -f paused
	I1115 10:37:49.064990  719318 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:37:49.070486  719318 out.go:179] * Done! kubectl is now configured to use "newest-cni-395885" cluster and "default" namespace by default
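	The start log above (the api_server.go lines) shows minikube repeatedly probing the apiserver's /healthz endpoint at https://192.168.76.2:8443 until it returns 200 before counting the control plane as up. As a rough illustration only, a standalone poll of that endpoint can be written with the Go standard library; the address, timeout, retry interval, and the choice to skip TLS verification below are assumptions for a quick manual probe, not minikube's actual implementation.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		const healthz = "https://192.168.76.2:8443/healthz"
	
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a cluster-CA-signed certificate; for a throwaway
			// probe we skip verification rather than wiring up the CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
	
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(healthz)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body) // the apiserver replies "ok"
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not report healthy before the deadline")
	}
	
	Depending on the cluster's anonymous-auth settings, /healthz may require client credentials from the kubeconfig; the unauthenticated probe above is only a sketch of the readiness wait recorded in the log.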
	
	
	==> CRI-O <==
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.634577586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.638262283Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0f25acbc-beba-4be8-a329-90791ef16291 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.645148523Z" level=info msg="Ran pod sandbox 5fb559bf76c6ce30ca70d0743ea73cb26c4edb39d3bccffe422ca418660664d3 with infra container: kube-system/kube-proxy-t26c4/POD" id=0f25acbc-beba-4be8-a329-90791ef16291 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.64877961Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=87e8a27d-2cc9-482e-a8d7-a772003a8994 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.651203985Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=56540460-448d-48a9-ac8c-afcda49a2c22 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.662823369Z" level=info msg="Creating container: kube-system/kube-proxy-t26c4/kube-proxy" id=c4d6d954-e347-4869-bb3d-40d8dd47cae8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.66311164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.671672147Z" level=info msg="Running pod sandbox: kube-system/kindnet-bqt7r/POD" id=ffd16f0b-c2e0-4909-91e4-4e1b7cec9e81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.671918943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.673496646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.674941734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.678709973Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ffd16f0b-c2e0-4909-91e4-4e1b7cec9e81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.697415198Z" level=info msg="Ran pod sandbox 714395881de768828624bd34df8375629a192b3196b143480fadd4c43c7abe7c with infra container: kube-system/kindnet-bqt7r/POD" id=ffd16f0b-c2e0-4909-91e4-4e1b7cec9e81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.700792794Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7b5a9069-2e21-490e-992d-b68c250712d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.702254464Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=38eecf9c-b88d-4a22-b895-aa50689cd2bf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.710998901Z" level=info msg="Creating container: kube-system/kindnet-bqt7r/kindnet-cni" id=df54f06f-fa80-4fe0-a3f1-f08ee23eb0a4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.711387657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.718817792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.719509129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.741495789Z" level=info msg="Created container d28679e7b27d1b954a06b1fcd52d106f2faa137deb2dee7ecfa5228abeca6b12: kube-system/kindnet-bqt7r/kindnet-cni" id=df54f06f-fa80-4fe0-a3f1-f08ee23eb0a4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.742700325Z" level=info msg="Starting container: d28679e7b27d1b954a06b1fcd52d106f2faa137deb2dee7ecfa5228abeca6b12" id=171daba9-63f0-4465-af6e-5f8eb19852b9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.747243952Z" level=info msg="Created container c4fcfd2818ff902a817213191e2c28c317655f510136fd356485e1c085098ed1: kube-system/kube-proxy-t26c4/kube-proxy" id=c4d6d954-e347-4869-bb3d-40d8dd47cae8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.747976297Z" level=info msg="Starting container: c4fcfd2818ff902a817213191e2c28c317655f510136fd356485e1c085098ed1" id=2eb46f38-92f1-4d59-9a8e-d0af0a23d930 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.748541549Z" level=info msg="Started container" PID=1501 containerID=d28679e7b27d1b954a06b1fcd52d106f2faa137deb2dee7ecfa5228abeca6b12 description=kube-system/kindnet-bqt7r/kindnet-cni id=171daba9-63f0-4465-af6e-5f8eb19852b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=714395881de768828624bd34df8375629a192b3196b143480fadd4c43c7abe7c
	Nov 15 10:37:48 newest-cni-395885 crio[837]: time="2025-11-15T10:37:48.753338945Z" level=info msg="Started container" PID=1496 containerID=c4fcfd2818ff902a817213191e2c28c317655f510136fd356485e1c085098ed1 description=kube-system/kube-proxy-t26c4/kube-proxy id=2eb46f38-92f1-4d59-9a8e-d0af0a23d930 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fb559bf76c6ce30ca70d0743ea73cb26c4edb39d3bccffe422ca418660664d3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d28679e7b27d1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   714395881de76       kindnet-bqt7r                               kube-system
	c4fcfd2818ff9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   5fb559bf76c6c       kube-proxy-t26c4                            kube-system
	7291ddcc8648c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   82599966a5b4f       kube-scheduler-newest-cni-395885            kube-system
	c8acfa92cecaa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   6e8916b57d268       kube-controller-manager-newest-cni-395885   kube-system
	adabf20f31c57       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   9bd6da8621506       etcd-newest-cni-395885                      kube-system
	179c76320f3a5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   54c5b631b1998       kube-apiserver-newest-cni-395885            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-395885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-395885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=newest-cni-395885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_37_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:37:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-395885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:37:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:37:42 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:37:42 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:37:42 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:37:42 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-395885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                9a5c3187-8172-4b90-a319-02f6840f592e
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-395885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-bqt7r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-395885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-395885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-t26c4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-395885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-395885 event: Registered Node newest-cni-395885 in Controller
	
	
	==> dmesg <==
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	[Nov15 10:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [adabf20f31c57d481d6dde43e21aaaf608c80c03d8209fc89c6f58ba22441bc1] <==
	{"level":"warn","ts":"2025-11-15T10:37:37.502336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.529807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.563406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.595378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.622998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.663865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.690397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.737751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.762026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.794402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.835027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.867758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.898312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.927924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.962862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:37.983584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.048044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.069384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.097773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.131037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.171023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.203411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.235508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.258822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:37:38.381139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34448","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:50 up  5:20,  0 user,  load average: 3.35, 3.50, 3.00
	Linux newest-cni-395885 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d28679e7b27d1b954a06b1fcd52d106f2faa137deb2dee7ecfa5228abeca6b12] <==
	I1115 10:37:48.918813       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:37:48.920320       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:37:48.920522       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:37:48.922037       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:37:48.922086       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:37:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:37:49.170691       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:37:49.170716       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:37:49.170724       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:37:49.170844       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [179c76320f3a5645d01921975d21fcbeb926febcdd16fd02745abfaa5ad04773] <==
	I1115 10:37:39.544982       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:37:39.545017       1 policy_source.go:240] refreshing policies
	I1115 10:37:39.560054       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:37:39.562359       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:37:39.571463       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:37:39.572226       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:37:39.575018       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:37:39.732363       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:37:40.096004       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:37:40.105312       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:37:40.105922       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:37:40.860701       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:37:40.926221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:37:41.077385       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:37:41.086883       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1115 10:37:41.088093       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:37:41.094254       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:37:41.295604       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:37:41.971667       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:37:41.991433       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:37:42.003911       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:37:46.747538       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:37:47.203627       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:37:47.216044       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:37:47.298326       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c8acfa92cecaa66d9cd242290194da3ddcd87e53169b1619c3a7d9f1c110a209] <==
	I1115 10:37:46.421783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:37:46.434170       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:37:46.438891       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:37:46.440171       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:37:46.440210       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:37:46.440277       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:37:46.440297       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:37:46.440324       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:37:46.441346       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:37:46.443740       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:37:46.446923       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:37:46.448146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:37:46.456484       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:37:46.464718       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:37:46.467021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:37:46.473398       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:37:46.473498       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:37:46.473571       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-395885"
	I1115 10:37:46.473651       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:37:46.489721       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:37:46.491099       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:37:46.491149       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:37:46.491460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:37:46.491469       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:37:46.491475       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c4fcfd2818ff902a817213191e2c28c317655f510136fd356485e1c085098ed1] <==
	I1115 10:37:48.857998       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:37:48.973566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:37:49.077563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:37:49.077730       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:37:49.077837       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:37:49.164514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:37:49.164641       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:37:49.212035       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:37:49.212437       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:37:49.212676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:37:49.218699       1 config.go:200] "Starting service config controller"
	I1115 10:37:49.218787       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:37:49.218829       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:37:49.218880       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:37:49.218916       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:37:49.218950       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:37:49.219642       1 config.go:309] "Starting node config controller"
	I1115 10:37:49.227306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:37:49.227360       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:37:49.319634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:37:49.319677       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:37:49.319718       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7291ddcc8648c767a5e8a6be282347dcc277a2b1717e985111765b8eb7a6a38b] <==
	I1115 10:37:40.104647       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:37:40.106813       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:37:40.106927       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:37:40.107880       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:37:40.107987       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 10:37:40.116166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 10:37:40.117592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:37:40.117712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:37:40.117759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:37:40.117795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:37:40.120906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:37:40.121032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:37:40.121223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:37:40.121328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:37:40.121411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:37:40.121520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:37:40.123819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:37:40.123932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:37:40.124065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:37:40.124156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:37:40.126641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:37:40.126821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:37:40.126859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:37:40.126907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1115 10:37:41.709297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:37:43 newest-cni-395885 kubelet[1301]: I1115 10:37:43.097938    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-395885" podStartSLOduration=2.097918093 podStartE2EDuration="2.097918093s" podCreationTimestamp="2025-11-15 10:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:43.075717909 +0000 UTC m=+1.274318276" watchObservedRunningTime="2025-11-15 10:37:43.097918093 +0000 UTC m=+1.296518452"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.461529    1301 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.462495    1301 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: E1115 10:37:46.840004    1301 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-395885\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-395885' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: E1115 10:37:46.840575    1301 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-t26c4\" is forbidden: User \"system:node:newest-cni-395885\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-395885' and this object" podUID="17d73502-7107-4d36-8af0-187ea6985a47" pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: E1115 10:37:46.840944    1301 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-395885\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-395885' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.890183    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17d73502-7107-4d36-8af0-187ea6985a47-lib-modules\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.890305    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17d73502-7107-4d36-8af0-187ea6985a47-kube-proxy\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.890325    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17d73502-7107-4d36-8af0-187ea6985a47-xtables-lock\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.890408    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5scrh\" (UniqueName: \"kubernetes.io/projected/17d73502-7107-4d36-8af0-187ea6985a47-kube-api-access-5scrh\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.990973    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-xtables-lock\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.991016    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-cni-cfg\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.991046    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-lib-modules\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:37:46 newest-cni-395885 kubelet[1301]: I1115 10:37:46.991083    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmkm\" (UniqueName: \"kubernetes.io/projected/d9452c06-f40d-4e91-be67-17e243f8edd9-kube-api-access-gdmkm\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:37:47 newest-cni-395885 kubelet[1301]: E1115 10:37:47.991578    1301 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:47 newest-cni-395885 kubelet[1301]: E1115 10:37:47.991678    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d73502-7107-4d36-8af0-187ea6985a47-kube-proxy podName:17d73502-7107-4d36-8af0-187ea6985a47 nodeName:}" failed. No retries permitted until 2025-11-15 10:37:48.491653787 +0000 UTC m=+6.690254146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/17d73502-7107-4d36-8af0-187ea6985a47-kube-proxy") pod "kube-proxy-t26c4" (UID: "17d73502-7107-4d36-8af0-187ea6985a47") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: E1115 10:37:48.008790    1301 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: E1115 10:37:48.008855    1301 projected.go:196] Error preparing data for projected volume kube-api-access-5scrh for pod kube-system/kube-proxy-t26c4: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: E1115 10:37:48.008940    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17d73502-7107-4d36-8af0-187ea6985a47-kube-api-access-5scrh podName:17d73502-7107-4d36-8af0-187ea6985a47 nodeName:}" failed. No retries permitted until 2025-11-15 10:37:48.508913194 +0000 UTC m=+6.707513553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5scrh" (UniqueName: "kubernetes.io/projected/17d73502-7107-4d36-8af0-187ea6985a47-kube-api-access-5scrh") pod "kube-proxy-t26c4" (UID: "17d73502-7107-4d36-8af0-187ea6985a47") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: E1115 10:37:48.103331    1301 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: E1115 10:37:48.103378    1301 projected.go:196] Error preparing data for projected volume kube-api-access-gdmkm for pod kube-system/kindnet-bqt7r: failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: E1115 10:37:48.103453    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9452c06-f40d-4e91-be67-17e243f8edd9-kube-api-access-gdmkm podName:d9452c06-f40d-4e91-be67-17e243f8edd9 nodeName:}" failed. No retries permitted until 2025-11-15 10:37:48.603430767 +0000 UTC m=+6.802031134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gdmkm" (UniqueName: "kubernetes.io/projected/d9452c06-f40d-4e91-be67-17e243f8edd9-kube-api-access-gdmkm") pod "kindnet-bqt7r" (UID: "d9452c06-f40d-4e91-be67-17e243f8edd9") : failed to sync configmap cache: timed out waiting for the condition
	Nov 15 10:37:48 newest-cni-395885 kubelet[1301]: I1115 10:37:48.512299    1301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:37:49 newest-cni-395885 kubelet[1301]: I1115 10:37:49.055782    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bqt7r" podStartSLOduration=3.055759911 podStartE2EDuration="3.055759911s" podCreationTimestamp="2025-11-15 10:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:49.019900483 +0000 UTC m=+7.218500858" watchObservedRunningTime="2025-11-15 10:37:49.055759911 +0000 UTC m=+7.254360270"
	Nov 15 10:37:49 newest-cni-395885 kubelet[1301]: I1115 10:37:49.425758    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t26c4" podStartSLOduration=3.425737194 podStartE2EDuration="3.425737194s" podCreationTimestamp="2025-11-15 10:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:49.058574118 +0000 UTC m=+7.257174493" watchObservedRunningTime="2025-11-15 10:37:49.425737194 +0000 UTC m=+7.624337561"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-395885 -n newest-cni-395885
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-395885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-mg7hm storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner: exit status 1 (121.896631ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-mg7hm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)
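A minimal manual re-check of the non-running pods (hypothetical command, not part of the harness; the NotFound errors above may simply mean the two pods listed earlier were replaced between the listing and the describe):

	kubectl --context newest-cni-395885 get po -A --field-selector=status.phase!=Running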

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (292.820224ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:37:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
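The step that fails here is minikube's paused-state probe ("check paused: list paused"). A minimal sketch of reproducing it by hand against this profile, assuming the node is still running (hypothetical invocation; the inner runc command is copied verbatim from the error above):

	out/minikube-linux-arm64 -p default-k8s-diff-port-303164 ssh -- sudo runc list -f json

If /run/runc is genuinely absent inside the node, this probe would presumably fail the same way for any addon command on this profile.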
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-303164 describe deploy/metrics-server -n kube-system: exit status 1 (106.941291ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-303164 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
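A minimal manual check of which image the addon actually deployed (hypothetical command; in this run it would report the same NotFound as the describe above, since the metrics-server deployment was never created):

	kubectl --context default-k8s-diff-port-303164 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'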
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-303164
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-303164:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec",
	        "Created": "2025-11-15T10:36:29.397887261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 715754,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:36:29.464015977Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/hosts",
	        "LogPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec-json.log",
	        "Name": "/default-k8s-diff-port-303164",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-303164:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-303164",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec",
	                "LowerDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-303164",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-303164/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-303164",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-303164",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-303164",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a894483cad7d18dd4fbce11e2dd844211e53a65afe3d9c413199541cc73828d",
	            "SandboxKey": "/var/run/docker/netns/7a894483cad7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-303164": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:a1:72:a0:19:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04f2761baa0d9af0d0867b1125f2a84414f21796e96d64d92b5c112e2b1380e0",
	                    "EndpointID": "592cddccc4b35805ce25381e5d83d0c2acb2861d6932b9bbed49cc4a0b7a46d0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-303164",
	                        "41c6c089346a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
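The same Go template that minikube itself uses later in these logs for the 22/tcp port can be pointed at the inspect output above to extract just the forwarded API-server port (hypothetical usage; per the Ports section, 8444/tcp maps to host port 33812):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-303164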
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs -n 25: (1.699457716s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:33 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-907610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │                     │
	│ stop    │ -p no-preload-907610 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:34 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-395885 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-395885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:37:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:37:53.423701  722503 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:53.423834  722503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:53.423846  722503 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:53.423851  722503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:53.424143  722503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:37:53.424562  722503 out.go:368] Setting JSON to false
	I1115 10:37:53.425537  722503 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19225,"bootTime":1763183849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:37:53.425700  722503 start.go:143] virtualization:  
	I1115 10:37:53.430838  722503 out.go:179] * [newest-cni-395885] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:37:53.434018  722503 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:37:53.434079  722503 notify.go:221] Checking for updates...
	I1115 10:37:53.439975  722503 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:37:53.442977  722503 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:37:53.445995  722503 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:37:53.448896  722503 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:37:53.451760  722503 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:37:53.455336  722503 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:53.455875  722503 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:37:53.479056  722503 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:37:53.479171  722503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:53.541899  722503 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:37:53.527859901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:53.542005  722503 docker.go:319] overlay module found
	I1115 10:37:53.545077  722503 out.go:179] * Using the docker driver based on existing profile
	I1115 10:37:53.547970  722503 start.go:309] selected driver: docker
	I1115 10:37:53.547989  722503 start.go:930] validating driver "docker" against &{Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:37:53.548102  722503 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:37:53.548840  722503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:53.606473  722503 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:37:53.596924638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:53.606803  722503 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:37:53.606836  722503 cni.go:84] Creating CNI manager for ""
	I1115 10:37:53.606901  722503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:37:53.606946  722503 start.go:353] cluster config:
	{Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:37:53.611961  722503 out.go:179] * Starting "newest-cni-395885" primary control-plane node in "newest-cni-395885" cluster
	I1115 10:37:53.614755  722503 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:37:53.617590  722503 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:37:53.620491  722503 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:37:53.620535  722503 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:37:53.620547  722503 cache.go:65] Caching tarball of preloaded images
	I1115 10:37:53.620577  722503 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:37:53.620642  722503 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:37:53.620653  722503 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:37:53.620789  722503 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json ...
	I1115 10:37:53.639197  722503 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:37:53.639218  722503 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:37:53.639237  722503 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:37:53.639260  722503 start.go:360] acquireMachinesLock for newest-cni-395885: {Name:mka4032c99bad1affc6ad41e6339261f7082d729 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:37:53.639320  722503 start.go:364] duration metric: took 37.398µs to acquireMachinesLock for "newest-cni-395885"
	I1115 10:37:53.639347  722503 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:37:53.639352  722503 fix.go:54] fixHost starting: 
	I1115 10:37:53.639599  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:53.656201  722503 fix.go:112] recreateIfNeeded on newest-cni-395885: state=Stopped err=<nil>
	W1115 10:37:53.656234  722503 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:37:53.659501  722503 out.go:252] * Restarting existing docker container for "newest-cni-395885" ...
	I1115 10:37:53.659591  722503 cli_runner.go:164] Run: docker start newest-cni-395885
	I1115 10:37:53.917368  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:53.943314  722503 kic.go:430] container "newest-cni-395885" state is running.
	I1115 10:37:53.943700  722503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:53.973532  722503 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json ...
	I1115 10:37:53.973826  722503 machine.go:94] provisionDockerMachine start ...
	I1115 10:37:53.973896  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:54.001682  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:54.002034  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:54.002045  722503 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:37:54.002915  722503 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59542->127.0.0.1:33819: read: connection reset by peer
	I1115 10:37:57.157471  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-395885
	
	I1115 10:37:57.157493  722503 ubuntu.go:182] provisioning hostname "newest-cni-395885"
	I1115 10:37:57.157557  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:57.175901  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:57.176228  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:57.176245  722503 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-395885 && echo "newest-cni-395885" | sudo tee /etc/hostname
	I1115 10:37:57.336251  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-395885
	
	I1115 10:37:57.336382  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:57.353976  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:57.354291  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:57.354315  722503 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-395885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-395885/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-395885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:37:57.506164  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:37:57.506190  722503 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:37:57.506211  722503 ubuntu.go:190] setting up certificates
	I1115 10:37:57.506220  722503 provision.go:84] configureAuth start
	I1115 10:37:57.506282  722503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:57.523533  722503 provision.go:143] copyHostCerts
	I1115 10:37:57.523600  722503 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:37:57.523615  722503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:37:57.523701  722503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:37:57.523795  722503 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:37:57.523800  722503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:37:57.523826  722503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:37:57.523917  722503 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:37:57.523927  722503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:37:57.523950  722503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:37:57.523994  722503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.newest-cni-395885 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-395885]
	I1115 10:37:58.280461  722503 provision.go:177] copyRemoteCerts
	I1115 10:37:58.280580  722503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:37:58.280654  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:58.311429  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Nov 15 10:37:46 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:46.393839159Z" level=info msg="Created container dd9c2b0b6385545e7d32da40e53372f9e2c9846eda53f0af7cf4fa877ccc8c0f: kube-system/coredns-66bc5c9577-97gv6/coredns" id=aac29dab-5b37-4acd-9bfe-7ee66acad129 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:46 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:46.394868192Z" level=info msg="Starting container: dd9c2b0b6385545e7d32da40e53372f9e2c9846eda53f0af7cf4fa877ccc8c0f" id=8aeecbf9-8e51-4584-9737-e8bc54bdcea9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:46 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:46.3967672Z" level=info msg="Started container" PID=1747 containerID=dd9c2b0b6385545e7d32da40e53372f9e2c9846eda53f0af7cf4fa877ccc8c0f description=kube-system/coredns-66bc5c9577-97gv6/coredns id=8aeecbf9-8e51-4584-9737-e8bc54bdcea9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16e3d3b7e2470d74649f8931311fbe318bd76ac60f443773320834c605cc0890
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.506548048Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8f1cf8d7-982a-4857-932c-3c50599afad8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.50664416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.53162505Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728 UID:f3cee1f1-d6f9-47b9-8bb8-b3314819f561 NetNS:/var/run/netns/16e78b55-5350-4fcd-b13b-493480fd6768 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40001325b8}] Aliases:map[]}"
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.531670603Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.548565196Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728 UID:f3cee1f1-d6f9-47b9-8bb8-b3314819f561 NetNS:/var/run/netns/16e78b55-5350-4fcd-b13b-493480fd6768 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40001325b8}] Aliases:map[]}"
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.548901884Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.552873334Z" level=info msg="Ran pod sandbox c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728 with infra container: default/busybox/POD" id=8f1cf8d7-982a-4857-932c-3c50599afad8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.554168666Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c07ab9b-6c77-40a9-a506-bffd64f2ffaa name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.554413935Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8c07ab9b-6c77-40a9-a506-bffd64f2ffaa name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.554529977Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8c07ab9b-6c77-40a9-a506-bffd64f2ffaa name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.555626767Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b546815d-fc04-4dd7-bc2a-b3f7c524328a name=/runtime.v1.ImageService/PullImage
	Nov 15 10:37:49 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:49.558205599Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.017865594Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b546815d-fc04-4dd7-bc2a-b3f7c524328a name=/runtime.v1.ImageService/PullImage
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.018620798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9d002c65-1ed1-46b2-9b5a-0f64f710ee57 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.022835137Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eff97eeb-ce79-42d4-9af2-b33b5f3f8160 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.031119525Z" level=info msg="Creating container: default/busybox/busybox" id=5f55e8ec-721c-4101-960a-9c097ef5e82f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.031252756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.039017536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.03949717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.064509181Z" level=info msg="Created container f9fb152e0df0b7a552a9bda0055af65ce1acce377cf8688c002c528db32d06eb: default/busybox/busybox" id=5f55e8ec-721c-4101-960a-9c097ef5e82f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.068501627Z" level=info msg="Starting container: f9fb152e0df0b7a552a9bda0055af65ce1acce377cf8688c002c528db32d06eb" id=ec851721-b7e6-48f9-b475-be23bcb485e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:37:52 default-k8s-diff-port-303164 crio[836]: time="2025-11-15T10:37:52.074072565Z" level=info msg="Started container" PID=1805 containerID=f9fb152e0df0b7a552a9bda0055af65ce1acce377cf8688c002c528db32d06eb description=default/busybox/busybox id=ec851721-b7e6-48f9-b475-be23bcb485e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f9fb152e0df0b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   c16de6447d253       busybox                                                default
	dd9c2b0b63855       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   16e3d3b7e2470       coredns-66bc5c9577-97gv6                               kube-system
	483fd7714d665       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   55efe92479c3a       storage-provisioner                                    kube-system
	3de70e42f1786       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   cf290ffd1144c       kindnet-rph85                                          kube-system
	36dac2a40263a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   e6367f963beda       kube-proxy-vmnnc                                       kube-system
	2dc77f50dadcb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   a7e643747e5cf       kube-controller-manager-default-k8s-diff-port-303164   kube-system
	cf14d90a80bd6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   79740ca28a07f       etcd-default-k8s-diff-port-303164                      kube-system
	338a272038ad1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   2c4c11ec6b9b4       kube-apiserver-default-k8s-diff-port-303164            kube-system
	09af4124abe41       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   8cfec60f6b2c6       kube-scheduler-default-k8s-diff-port-303164            kube-system
	
	
	==> coredns [dd9c2b0b6385545e7d32da40e53372f9e2c9846eda53f0af7cf4fa877ccc8c0f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39299 - 40712 "HINFO IN 2789849520918849017.930028009988119986. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029152678s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-303164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-303164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=default-k8s-diff-port-303164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_36_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:36:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-303164
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:37:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:37:59 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:37:59 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:37:59 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:37:59 +0000   Sat, 15 Nov 2025 10:37:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-303164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                4f8ed4eb-3c24-41b5-a3a9-de151f112693
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-97gv6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-303164                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-rph85                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-303164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-303164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-vmnnc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-303164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 55s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node default-k8s-diff-port-303164 event: Registered Node default-k8s-diff-port-303164 in Controller
	  Normal   NodeReady                15s   kubelet          Node default-k8s-diff-port-303164 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov15 10:15] overlayfs: idmapped layers are currently not supported
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	[Nov15 10:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cf14d90a80bd6a0e181c94d5ea230367fa2e06e817c850a826df34489ad626e8] <==
	{"level":"warn","ts":"2025-11-15T10:36:54.734983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.762126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.778849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.794503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.805339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.825568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.847218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.863702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.882567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.903648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.922110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.947863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.961391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.977330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:54.993425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.015212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.037308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.054216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.065652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.090426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.110313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.136248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.171164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.188700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:36:55.243178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58714","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:00 up  5:20,  0 user,  load average: 3.83, 3.60, 3.04
	Linux default-k8s-diff-port-303164 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3de70e42f1786c4c2defbbc8eea0577e640e67c622f67f91b4e7947f89b175e0] <==
	I1115 10:37:05.310352       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:37:05.310624       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:37:05.310741       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:37:05.310752       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:37:05.310764       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:37:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:37:05.444431       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:37:05.444466       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:37:05.444474       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:37:05.510755       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:37:35.444637       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:37:35.511200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1115 10:37:35.511310       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:37:35.511396       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1115 10:37:36.944825       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:37:36.944863       1 metrics.go:72] Registering metrics
	I1115 10:37:36.944928       1 controller.go:711] "Syncing nftables rules"
	I1115 10:37:45.444699       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:37:45.444740       1 main.go:301] handling current node
	I1115 10:37:55.444898       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:37:55.444934       1 main.go:301] handling current node
	
	
	==> kube-apiserver [338a272038ad1a2825e88a479e6046e50015a4e42ea2a39ff142e4be0bb90478] <==
	I1115 10:36:56.111986       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:36:56.114315       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:36:56.114347       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1115 10:36:56.126614       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:56.126773       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:36:56.132766       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:36:56.132848       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:36:56.144195       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:36:56.817532       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:36:56.823997       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:36:56.824023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:36:57.587202       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:36:57.637208       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:36:57.713445       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:36:57.723317       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 10:36:57.724652       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:36:57.733072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:36:58.107998       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:36:58.574282       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:36:58.597826       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:36:58.612474       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:37:03.515993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:37:04.113111       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:37:04.226485       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:37:04.344327       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [2dc77f50dadcb6f6dfb665321b51e5cfe4bfbd9986d823e1d2437993c86db92b] <==
	I1115 10:37:03.185088       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-303164" podCIDRs=["10.244.0.0/24"]
	I1115 10:37:03.187720       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:37:03.194293       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:37:03.203282       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:37:03.210207       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:37:03.210285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:37:03.210293       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:37:03.210301       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:37:03.210349       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:37:03.210539       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:37:03.210724       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:37:03.211394       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:37:03.211421       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:37:03.211470       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:37:03.211537       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-303164"
	I1115 10:37:03.211573       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:37:03.211601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:37:03.211621       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:37:03.211635       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:37:03.212327       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:37:03.212588       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:37:03.213091       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:37:03.241321       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:37:03.249269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:37:48.225272       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [36dac2a40263a76546ee75e4e5bdd4c23f67814706d4ecf58b057dbadcb938cb] <==
	I1115 10:37:04.903849       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:37:05.080516       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:37:05.180660       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:37:05.180687       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:37:05.180750       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:37:05.268601       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:37:05.268662       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:37:05.276506       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:37:05.276801       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:37:05.276831       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:37:05.284070       1 config.go:200] "Starting service config controller"
	I1115 10:37:05.284092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:37:05.284107       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:37:05.284111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:37:05.284123       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:37:05.284127       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:37:05.287143       1 config.go:309] "Starting node config controller"
	I1115 10:37:05.287153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:37:05.287160       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:37:05.385786       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:37:05.385818       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:37:05.385861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [09af4124abe41ae48813c9fdf3389c4608bc63670ac9ced1619bf33914c6d0a9] <==
	E1115 10:36:56.088672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:36:56.088742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:36:56.088810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:36:56.088874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:36:56.088934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:36:56.088996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:36:56.089058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:36:56.089116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:36:56.089223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:36:56.089293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:36:56.089331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:36:56.089383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:36:56.089837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:36:56.898880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:36:56.923926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:36:56.931578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:36:56.941397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:36:56.975495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:36:57.019403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:36:57.062441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:36:57.100572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:36:57.215017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:36:57.269742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1115 10:36:57.276922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1115 10:37:00.243449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:36:59 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:36:59.735643    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-303164" podStartSLOduration=1.735622185 podStartE2EDuration="1.735622185s" podCreationTimestamp="2025-11-15 10:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:36:59.735074705 +0000 UTC m=+1.329063853" watchObservedRunningTime="2025-11-15 10:36:59.735622185 +0000 UTC m=+1.329611341"
	Nov 15 10:37:03 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:03.200497    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:37:03 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:03.202410    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.344734    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqw7b\" (UniqueName: \"kubernetes.io/projected/e61077d0-3c58-4094-ad7e-436ec2f7fb3f-kube-api-access-dqw7b\") pod \"kube-proxy-vmnnc\" (UID: \"e61077d0-3c58-4094-ad7e-436ec2f7fb3f\") " pod="kube-system/kube-proxy-vmnnc"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.344774    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e61077d0-3c58-4094-ad7e-436ec2f7fb3f-kube-proxy\") pod \"kube-proxy-vmnnc\" (UID: \"e61077d0-3c58-4094-ad7e-436ec2f7fb3f\") " pod="kube-system/kube-proxy-vmnnc"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.344809    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e61077d0-3c58-4094-ad7e-436ec2f7fb3f-xtables-lock\") pod \"kube-proxy-vmnnc\" (UID: \"e61077d0-3c58-4094-ad7e-436ec2f7fb3f\") " pod="kube-system/kube-proxy-vmnnc"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.344829    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e61077d0-3c58-4094-ad7e-436ec2f7fb3f-lib-modules\") pod \"kube-proxy-vmnnc\" (UID: \"e61077d0-3c58-4094-ad7e-436ec2f7fb3f\") " pod="kube-system/kube-proxy-vmnnc"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.445020    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30ef2b98-29f3-4a7e-a041-5a6bd98c92ef-cni-cfg\") pod \"kindnet-rph85\" (UID: \"30ef2b98-29f3-4a7e-a041-5a6bd98c92ef\") " pod="kube-system/kindnet-rph85"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.549842    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30ef2b98-29f3-4a7e-a041-5a6bd98c92ef-xtables-lock\") pod \"kindnet-rph85\" (UID: \"30ef2b98-29f3-4a7e-a041-5a6bd98c92ef\") " pod="kube-system/kindnet-rph85"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.549892    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30ef2b98-29f3-4a7e-a041-5a6bd98c92ef-lib-modules\") pod \"kindnet-rph85\" (UID: \"30ef2b98-29f3-4a7e-a041-5a6bd98c92ef\") " pod="kube-system/kindnet-rph85"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.549917    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89mk\" (UniqueName: \"kubernetes.io/projected/30ef2b98-29f3-4a7e-a041-5a6bd98c92ef-kube-api-access-r89mk\") pod \"kindnet-rph85\" (UID: \"30ef2b98-29f3-4a7e-a041-5a6bd98c92ef\") " pod="kube-system/kindnet-rph85"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:04.562927    1314 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: W1115 10:37:04.597449    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-e6367f963beda7c71258d10d2ac08c18fc977094a764830f18fc6dae508a6845 WatchSource:0}: Error finding container e6367f963beda7c71258d10d2ac08c18fc977094a764830f18fc6dae508a6845: Status 404 returned error can't find the container with id e6367f963beda7c71258d10d2ac08c18fc977094a764830f18fc6dae508a6845
	Nov 15 10:37:04 default-k8s-diff-port-303164 kubelet[1314]: W1115 10:37:04.998272    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-cf290ffd1144c6e6cb7bb07b53bf359191039e18b52015516759ed404d9f62ed WatchSource:0}: Error finding container cf290ffd1144c6e6cb7bb07b53bf359191039e18b52015516759ed404d9f62ed: Status 404 returned error can't find the container with id cf290ffd1144c6e6cb7bb07b53bf359191039e18b52015516759ed404d9f62ed
	Nov 15 10:37:05 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:05.745749    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vmnnc" podStartSLOduration=1.745720205 podStartE2EDuration="1.745720205s" podCreationTimestamp="2025-11-15 10:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:05.709179744 +0000 UTC m=+7.303168884" watchObservedRunningTime="2025-11-15 10:37:05.745720205 +0000 UTC m=+7.339709345"
	Nov 15 10:37:08 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:08.576136    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rph85" podStartSLOduration=4.576092884 podStartE2EDuration="4.576092884s" podCreationTimestamp="2025-11-15 10:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:05.752761757 +0000 UTC m=+7.346750929" watchObservedRunningTime="2025-11-15 10:37:08.576092884 +0000 UTC m=+10.170082024"
	Nov 15 10:37:45 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:45.923439    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:37:46 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:46.065581    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ft2\" (UniqueName: \"kubernetes.io/projected/b6f9a65e-75c6-4783-a879-1dfc86407862-kube-api-access-d6ft2\") pod \"coredns-66bc5c9577-97gv6\" (UID: \"b6f9a65e-75c6-4783-a879-1dfc86407862\") " pod="kube-system/coredns-66bc5c9577-97gv6"
	Nov 15 10:37:46 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:46.065682    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/344be432-6b85-4dea-a1a0-54ce0079d253-tmp\") pod \"storage-provisioner\" (UID: \"344be432-6b85-4dea-a1a0-54ce0079d253\") " pod="kube-system/storage-provisioner"
	Nov 15 10:37:46 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:46.065712    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk29z\" (UniqueName: \"kubernetes.io/projected/344be432-6b85-4dea-a1a0-54ce0079d253-kube-api-access-kk29z\") pod \"storage-provisioner\" (UID: \"344be432-6b85-4dea-a1a0-54ce0079d253\") " pod="kube-system/storage-provisioner"
	Nov 15 10:37:46 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:46.065734    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6f9a65e-75c6-4783-a879-1dfc86407862-config-volume\") pod \"coredns-66bc5c9577-97gv6\" (UID: \"b6f9a65e-75c6-4783-a879-1dfc86407862\") " pod="kube-system/coredns-66bc5c9577-97gv6"
	Nov 15 10:37:46 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:46.819517    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.819499957 podStartE2EDuration="41.819499957s" podCreationTimestamp="2025-11-15 10:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:46.798443946 +0000 UTC m=+48.392433086" watchObservedRunningTime="2025-11-15 10:37:46.819499957 +0000 UTC m=+48.413489096"
	Nov 15 10:37:49 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:49.195092    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-97gv6" podStartSLOduration=45.195071408 podStartE2EDuration="45.195071408s" podCreationTimestamp="2025-11-15 10:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:37:46.820928454 +0000 UTC m=+48.414917602" watchObservedRunningTime="2025-11-15 10:37:49.195071408 +0000 UTC m=+50.789060564"
	Nov 15 10:37:49 default-k8s-diff-port-303164 kubelet[1314]: I1115 10:37:49.301731    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhkj\" (UniqueName: \"kubernetes.io/projected/f3cee1f1-d6f9-47b9-8bb8-b3314819f561-kube-api-access-mkhkj\") pod \"busybox\" (UID: \"f3cee1f1-d6f9-47b9-8bb8-b3314819f561\") " pod="default/busybox"
	Nov 15 10:37:49 default-k8s-diff-port-303164 kubelet[1314]: W1115 10:37:49.551107    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728 WatchSource:0}: Error finding container c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728: Status 404 returned error can't find the container with id c16de6447d253e59c0834e76a6132dd72981e25347663bfef7e1a792c2b64728
	
	
	==> storage-provisioner [483fd7714d6652b4ca0d7364f3640a192ec213d40d3014835d7d473d9673be44] <==
	I1115 10:37:46.403310       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:37:46.439622       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:37:46.439736       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:37:46.442992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:46.451377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:37:46.451599       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:37:46.453775       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303164_7dc41d8e-6b3b-4388-a329-bfcdae08a182!
	I1115 10:37:46.466365       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6d7f6c5-5cd5-4e38-9b83-ceab25b500ef", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-303164_7dc41d8e-6b3b-4388-a329-bfcdae08a182 became leader
	W1115 10:37:46.468052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:46.484741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:37:46.554107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303164_7dc41d8e-6b3b-4388-a329-bfcdae08a182!
	W1115 10:37:48.501843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:48.525794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:50.531294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:50.537874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:52.542562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:52.549978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:54.553676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:54.558549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:56.561586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:56.566192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:58.569179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:37:58.582395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:38:00.586182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:38:00.598298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-395885 --alsologtostderr -v=1
E1115 10:38:10.231288  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-395885 --alsologtostderr -v=1: exit status 80 (1.858850829s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-395885 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:38:10.070849  724935 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:38:10.071065  724935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:10.071072  724935 out.go:374] Setting ErrFile to fd 2...
	I1115 10:38:10.071091  724935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:10.071496  724935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:38:10.071866  724935 out.go:368] Setting JSON to false
	I1115 10:38:10.071909  724935 mustload.go:66] Loading cluster: newest-cni-395885
	I1115 10:38:10.072472  724935 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:10.073111  724935 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:38:10.093278  724935 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:38:10.093794  724935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:38:10.159749  724935 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-15 10:38:10.150295207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:38:10.160629  724935 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-395885 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:38:10.164300  724935 out.go:179] * Pausing node newest-cni-395885 ... 
	I1115 10:38:10.168272  724935 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:38:10.168829  724935 ssh_runner.go:195] Run: systemctl --version
	I1115 10:38:10.168935  724935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:38:10.187556  724935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:38:10.292181  724935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:38:10.305019  724935 pause.go:52] kubelet running: true
	I1115 10:38:10.305111  724935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:38:10.608786  724935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:38:10.608883  724935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:38:10.687656  724935 cri.go:89] found id: "2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61"
	I1115 10:38:10.687682  724935 cri.go:89] found id: "be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262"
	I1115 10:38:10.687688  724935 cri.go:89] found id: "c0b89a9a54e4782d7dd2d073962c3da42d810813dcd395a58854a0e7cbd4fa57"
	I1115 10:38:10.687692  724935 cri.go:89] found id: "5c57ab84f17873a5820a32c264a7b164f1758eeaf5d07f10f42f390ff89b8f0e"
	I1115 10:38:10.687696  724935 cri.go:89] found id: "0ef8b64aeb6b0e2149310a4449a6195f4b20ef81e57dc47596b4eb37353357a7"
	I1115 10:38:10.687699  724935 cri.go:89] found id: "6972587a3df160f4b296d52dabb29cae58f981bb9b1c79934c5379e31c9c1408"
	I1115 10:38:10.687702  724935 cri.go:89] found id: ""
	I1115 10:38:10.687752  724935 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:38:10.698915  724935 retry.go:31] will retry after 225.3351ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:10Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:38:10.925367  724935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:38:10.939653  724935 pause.go:52] kubelet running: false
	I1115 10:38:10.939718  724935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:38:11.153489  724935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:38:11.153565  724935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:38:11.239001  724935 cri.go:89] found id: "2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61"
	I1115 10:38:11.239029  724935 cri.go:89] found id: "be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262"
	I1115 10:38:11.239048  724935 cri.go:89] found id: "c0b89a9a54e4782d7dd2d073962c3da42d810813dcd395a58854a0e7cbd4fa57"
	I1115 10:38:11.239053  724935 cri.go:89] found id: "5c57ab84f17873a5820a32c264a7b164f1758eeaf5d07f10f42f390ff89b8f0e"
	I1115 10:38:11.239057  724935 cri.go:89] found id: "0ef8b64aeb6b0e2149310a4449a6195f4b20ef81e57dc47596b4eb37353357a7"
	I1115 10:38:11.239061  724935 cri.go:89] found id: "6972587a3df160f4b296d52dabb29cae58f981bb9b1c79934c5379e31c9c1408"
	I1115 10:38:11.239065  724935 cri.go:89] found id: ""
	I1115 10:38:11.239115  724935 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:38:11.249964  724935 retry.go:31] will retry after 306.441402ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:11Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:38:11.557589  724935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:38:11.571481  724935 pause.go:52] kubelet running: false
	I1115 10:38:11.571548  724935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:38:11.740215  724935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:38:11.740346  724935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:38:11.828561  724935 cri.go:89] found id: "2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61"
	I1115 10:38:11.828645  724935 cri.go:89] found id: "be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262"
	I1115 10:38:11.828683  724935 cri.go:89] found id: "c0b89a9a54e4782d7dd2d073962c3da42d810813dcd395a58854a0e7cbd4fa57"
	I1115 10:38:11.828723  724935 cri.go:89] found id: "5c57ab84f17873a5820a32c264a7b164f1758eeaf5d07f10f42f390ff89b8f0e"
	I1115 10:38:11.828749  724935 cri.go:89] found id: "0ef8b64aeb6b0e2149310a4449a6195f4b20ef81e57dc47596b4eb37353357a7"
	I1115 10:38:11.828775  724935 cri.go:89] found id: "6972587a3df160f4b296d52dabb29cae58f981bb9b1c79934c5379e31c9c1408"
	I1115 10:38:11.828809  724935 cri.go:89] found id: ""
	I1115 10:38:11.828917  724935 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:38:11.843883  724935 out.go:203] 
	W1115 10:38:11.846766  724935 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:38:11.846785  724935 out.go:285] * 
	* 
	W1115 10:38:11.854110  724935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:38:11.858909  724935 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-395885 --alsologtostderr -v=1 failed: exit status 80
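The stderr above shows pause failing at the container-listing step: crictl can enumerate the kube-system containers, but `sudo runc list -f json` keeps returning "open /run/runc: no such file or directory", so minikube exits with GUEST_PAUSE after its retries. A minimal sketch of re-running those same probes by hand, assuming the newest-cni-395885 node is still up under the same binary:

	out/minikube-linux-arm64 ssh -p newest-cni-395885 -- sudo ls -la /run/runc
	out/minikube-linux-arm64 ssh -p newest-cni-395885 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p newest-cni-395885 -- sudo crictl ps --state running --quiet

If the first command confirms /run/runc is absent while crictl still lists running containers, that would suggest a mismatch between the CRI-O runtime's state directory and the path the pause code queries, which is consistent with the retry pattern captured above.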
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-395885
helpers_test.go:243: (dbg) docker inspect newest-cni-395885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618",
	        "Created": "2025-11-15T10:37:16.384426052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:37:53.693163484Z",
	            "FinishedAt": "2025-11-15T10:37:52.687930147Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/hosts",
	        "LogPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618-json.log",
	        "Name": "/newest-cni-395885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-395885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-395885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618",
	                "LowerDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-395885",
	                "Source": "/var/lib/docker/volumes/newest-cni-395885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-395885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-395885",
	                "name.minikube.sigs.k8s.io": "newest-cni-395885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6334b263c6a11c5764bb0a0a0e4029fc499c080f7ac3bc93dbb835e3767e4d36",
	            "SandboxKey": "/var/run/docker/netns/6334b263c6a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-395885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7c:eb:80:79:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c000d9cd848aa0e1eda0146b58174b6c18a724587543714ebd99f791f9b9348d",
	                    "EndpointID": "56efd3f5355f28ed0811f910e3cf5f49fbe20a2f4d648ac3a039d73202380048",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-395885",
	                        "4aa47ed5c3a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
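The inspect output records the host port mappings the pause command depended on; the SSH dial in the stderr earlier went to 127.0.0.1:33819, which matches the 22/tcp entry here. A hedged way to cross-check the live mapping, assuming the container still exists:

	docker port newest-cni-395885 22
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-395885

The second command is the same template the cli_runner log lines show above, so a mismatch here would point at the container having been recreated between steps rather than at the runc error.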
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885: exit status 2 (366.095908ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
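The harness treats exit status 2 from this status probe as potentially benign ("may be ok"), since the host field still reports Running while other components may not. For a fuller picture under the same assumptions (same binary, same profile), the status can be dumped as JSON instead of a single field:

	out/minikube-linux-arm64 status -p newest-cni-395885 --output=json

That shows the host, kubelet, apiserver, and kubeconfig states together, which is relevant here because the pause attempt had already disabled the kubelet before failing.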
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-395885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-395885 logs -n 25: (1.029849803s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-395885 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-395885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303164 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	│ image   │ newest-cni-395885 image list --format=json                                                                                                                                                                                                    │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-395885 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:37:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:37:53.423701  722503 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:37:53.423834  722503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:53.423846  722503 out.go:374] Setting ErrFile to fd 2...
	I1115 10:37:53.423851  722503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:37:53.424143  722503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:37:53.424562  722503 out.go:368] Setting JSON to false
	I1115 10:37:53.425537  722503 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19225,"bootTime":1763183849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:37:53.425700  722503 start.go:143] virtualization:  
	I1115 10:37:53.430838  722503 out.go:179] * [newest-cni-395885] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:37:53.434018  722503 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:37:53.434079  722503 notify.go:221] Checking for updates...
	I1115 10:37:53.439975  722503 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:37:53.442977  722503 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:37:53.445995  722503 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:37:53.448896  722503 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:37:53.451760  722503 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:37:53.455336  722503 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:53.455875  722503 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:37:53.479056  722503 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:37:53.479171  722503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:53.541899  722503 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:37:53.527859901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:53.542005  722503 docker.go:319] overlay module found
	I1115 10:37:53.545077  722503 out.go:179] * Using the docker driver based on existing profile
	I1115 10:37:53.547970  722503 start.go:309] selected driver: docker
	I1115 10:37:53.547989  722503 start.go:930] validating driver "docker" against &{Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:37:53.548102  722503 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:37:53.548840  722503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:37:53.606473  722503 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:37:53.596924638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:37:53.606803  722503 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:37:53.606836  722503 cni.go:84] Creating CNI manager for ""
	I1115 10:37:53.606901  722503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:37:53.606946  722503 start.go:353] cluster config:
	{Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:37:53.611961  722503 out.go:179] * Starting "newest-cni-395885" primary control-plane node in "newest-cni-395885" cluster
	I1115 10:37:53.614755  722503 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:37:53.617590  722503 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:37:53.620491  722503 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:37:53.620535  722503 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:37:53.620547  722503 cache.go:65] Caching tarball of preloaded images
	I1115 10:37:53.620577  722503 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:37:53.620642  722503 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:37:53.620653  722503 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:37:53.620789  722503 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json ...
	I1115 10:37:53.639197  722503 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:37:53.639218  722503 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:37:53.639237  722503 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:37:53.639260  722503 start.go:360] acquireMachinesLock for newest-cni-395885: {Name:mka4032c99bad1affc6ad41e6339261f7082d729 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:37:53.639320  722503 start.go:364] duration metric: took 37.398µs to acquireMachinesLock for "newest-cni-395885"
	I1115 10:37:53.639347  722503 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:37:53.639352  722503 fix.go:54] fixHost starting: 
	I1115 10:37:53.639599  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:53.656201  722503 fix.go:112] recreateIfNeeded on newest-cni-395885: state=Stopped err=<nil>
	W1115 10:37:53.656234  722503 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:37:53.659501  722503 out.go:252] * Restarting existing docker container for "newest-cni-395885" ...
	I1115 10:37:53.659591  722503 cli_runner.go:164] Run: docker start newest-cni-395885
	I1115 10:37:53.917368  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:37:53.943314  722503 kic.go:430] container "newest-cni-395885" state is running.
	I1115 10:37:53.943700  722503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:53.973532  722503 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/config.json ...
	I1115 10:37:53.973826  722503 machine.go:94] provisionDockerMachine start ...
	I1115 10:37:53.973896  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:54.001682  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:54.002034  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:54.002045  722503 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:37:54.002915  722503 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59542->127.0.0.1:33819: read: connection reset by peer
	I1115 10:37:57.157471  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-395885
	
	I1115 10:37:57.157493  722503 ubuntu.go:182] provisioning hostname "newest-cni-395885"
	I1115 10:37:57.157557  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:57.175901  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:57.176228  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:57.176245  722503 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-395885 && echo "newest-cni-395885" | sudo tee /etc/hostname
	I1115 10:37:57.336251  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-395885
	
	I1115 10:37:57.336382  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:57.353976  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:57.354291  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:57.354315  722503 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-395885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-395885/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-395885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:37:57.506164  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
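The hostname step above is plain shell executed over SSH against the forwarded port 33819 (user docker, key path as the sshutil lines below report). For reference, roughly the same round trip can be reproduced with golang.org/x/crypto/ssh; this is a standalone sketch, not minikube's ssh_runner, and the host-key check is skipped only because the target is a throwaway local test container.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port are the ones reported in the log.
	keyPath := "/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33819", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same check the provisioner performs: the remote hostname should match the profile name.
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}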
	I1115 10:37:57.506190  722503 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:37:57.506211  722503 ubuntu.go:190] setting up certificates
	I1115 10:37:57.506220  722503 provision.go:84] configureAuth start
	I1115 10:37:57.506282  722503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:57.523533  722503 provision.go:143] copyHostCerts
	I1115 10:37:57.523600  722503 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:37:57.523615  722503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:37:57.523701  722503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:37:57.523795  722503 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:37:57.523800  722503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:37:57.523826  722503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:37:57.523917  722503 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:37:57.523927  722503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:37:57.523950  722503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:37:57.523994  722503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.newest-cni-395885 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-395885]
	I1115 10:37:58.280461  722503 provision.go:177] copyRemoteCerts
	I1115 10:37:58.280580  722503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:37:58.280654  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:58.311429  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:58.426257  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:37:58.450916  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:37:58.476343  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:37:58.504504  722503 provision.go:87] duration metric: took 998.268065ms to configureAuth
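configureAuth above regenerates machines/server.pem with the SAN set [127.0.0.1 192.168.76.2 localhost minikube newest-cni-395885] before copying it to /etc/docker on the node. If a restart like this ever fails on a TLS mismatch, the quickest check is to read the SANs straight out of the file; a minimal standard-library sketch (path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// DNS and IP SANs are what the remote endpoints are validated against.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("expires: ", cert.NotAfter)
}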
	I1115 10:37:58.504532  722503 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:37:58.504740  722503 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:37:58.504846  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:58.523302  722503 main.go:143] libmachine: Using SSH client type: native
	I1115 10:37:58.523607  722503 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33819 <nil> <nil>}
	I1115 10:37:58.523628  722503 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:37:58.928870  722503 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:37:58.928895  722503 machine.go:97] duration metric: took 4.955050895s to provisionDockerMachine
	I1115 10:37:58.928906  722503 start.go:293] postStartSetup for "newest-cni-395885" (driver="docker")
	I1115 10:37:58.928917  722503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:37:58.928977  722503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:37:58.929019  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:58.963173  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:59.074765  722503 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:37:59.080415  722503 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:37:59.080449  722503 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:37:59.080461  722503 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:37:59.080518  722503 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:37:59.080606  722503 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:37:59.080715  722503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:37:59.090186  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:37:59.119731  722503 start.go:296] duration metric: took 190.807616ms for postStartSetup
	I1115 10:37:59.119825  722503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:37:59.119885  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:59.175451  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:59.278579  722503 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:37:59.283425  722503 fix.go:56] duration metric: took 5.644064833s for fixHost
	I1115 10:37:59.283448  722503 start.go:83] releasing machines lock for "newest-cni-395885", held for 5.644114431s
	I1115 10:37:59.283526  722503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-395885
	I1115 10:37:59.306742  722503 ssh_runner.go:195] Run: cat /version.json
	I1115 10:37:59.306794  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:59.306820  722503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:37:59.306911  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:37:59.340418  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:59.340958  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:37:59.485731  722503 ssh_runner.go:195] Run: systemctl --version
	I1115 10:37:59.584740  722503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:37:59.651763  722503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:37:59.656194  722503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:37:59.656269  722503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:37:59.664961  722503 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:37:59.664982  722503 start.go:496] detecting cgroup driver to use...
	I1115 10:37:59.665013  722503 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:37:59.665068  722503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:37:59.686674  722503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:37:59.703543  722503 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:37:59.703652  722503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:37:59.720936  722503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:37:59.734579  722503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:37:59.895885  722503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:38:00.081033  722503 docker.go:234] disabling docker service ...
	I1115 10:38:00.081129  722503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:38:00.101331  722503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:38:00.132715  722503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:38:00.383938  722503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:38:00.560275  722503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:38:00.574749  722503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:38:00.598200  722503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:38:00.598270  722503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.613069  722503 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:38:00.613141  722503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.624387  722503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.637061  722503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.649990  722503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:38:00.659702  722503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.672685  722503 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.686439  722503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:00.698328  722503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:38:00.708423  722503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:38:00.717417  722503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:00.861784  722503 ssh_runner.go:195] Run: sudo systemctl restart crio
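The block above reconfigures cri-o entirely through sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts the service. As an illustration of what the cgroup-driver edit does, here is the same line rewrite as a pure Go function; it assumes the key = "value" TOML layout that the sed expression targets and is not the code minikube runs.

package main

import (
	"fmt"
	"regexp"
)

// setCgroupManager mimics the sed rewrite above: any existing
// `cgroup_manager = ...` line is replaced with the requested driver.
func setCgroupManager(conf, driver string) string {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", driver))
}

func main() {
	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(setCgroupManager(conf, "cgroupfs"))
}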
	I1115 10:38:01.030129  722503 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:38:01.030188  722503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:38:01.034674  722503 start.go:564] Will wait 60s for crictl version
	I1115 10:38:01.034731  722503 ssh_runner.go:195] Run: which crictl
	I1115 10:38:01.038616  722503 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:38:01.071949  722503 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:38:01.072041  722503 ssh_runner.go:195] Run: crio --version
	I1115 10:38:01.111980  722503 ssh_runner.go:195] Run: crio --version
	I1115 10:38:01.157077  722503 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:38:01.160096  722503 cli_runner.go:164] Run: docker network inspect newest-cni-395885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:01.179272  722503 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:38:01.183894  722503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:01.200747  722503 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:38:01.203701  722503 kubeadm.go:884] updating cluster {Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:38:01.203887  722503 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:01.203983  722503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:01.245642  722503 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:01.245665  722503 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:38:01.245723  722503 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:01.286907  722503 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:01.286927  722503 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:38:01.286935  722503 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:38:01.287043  722503 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-395885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:38:01.287130  722503 ssh_runner.go:195] Run: crio config
	I1115 10:38:01.354735  722503 cni.go:84] Creating CNI manager for ""
	I1115 10:38:01.354754  722503 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:01.354776  722503 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:38:01.354800  722503 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-395885 NodeName:newest-cni-395885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:38:01.355187  722503 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-395885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:38:01.355281  722503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:38:01.370997  722503 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:38:01.371070  722503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:38:01.379820  722503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1115 10:38:01.403964  722503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:38:01.428851  722503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
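The 2212-byte kubeadm.yaml.new just copied is the four-document YAML stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for splitting such a stream and listing the kinds, assuming gopkg.in/yaml.v3 is available; it is only an inspection aid, not part of minikube.

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Each document carries its own apiVersion/kind pair.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}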
	I1115 10:38:01.459713  722503 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:38:01.464194  722503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:01.474456  722503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:01.677084  722503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:01.701019  722503 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885 for IP: 192.168.76.2
	I1115 10:38:01.701038  722503 certs.go:195] generating shared ca certs ...
	I1115 10:38:01.701061  722503 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:01.701217  722503 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:38:01.701257  722503 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:38:01.701263  722503 certs.go:257] generating profile certs ...
	I1115 10:38:01.701341  722503 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/client.key
	I1115 10:38:01.701401  722503 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key.128d9837
	I1115 10:38:01.701437  722503 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.key
	I1115 10:38:01.701552  722503 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:38:01.701581  722503 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:38:01.701589  722503 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:38:01.701716  722503 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:38:01.701763  722503 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:38:01.701787  722503 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:38:01.701831  722503 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:01.702394  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:38:01.732005  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:38:01.777254  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:38:01.824176  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:38:01.851936  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:38:01.876467  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:38:01.908157  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:38:01.952388  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/newest-cni-395885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:38:01.999466  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:38:02.070748  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:38:02.108742  722503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:38:02.155075  722503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:38:02.178676  722503 ssh_runner.go:195] Run: openssl version
	I1115 10:38:02.187411  722503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:38:02.207260  722503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:02.211742  722503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:02.211861  722503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:02.287501  722503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:38:02.297288  722503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:38:02.306811  722503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:38:02.311469  722503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:38:02.311535  722503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:38:02.366521  722503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:38:02.374506  722503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:38:02.382995  722503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:38:02.386752  722503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:38:02.386859  722503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:38:02.427362  722503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:38:02.437389  722503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:38:02.442226  722503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:38:02.485997  722503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:38:02.548180  722503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:38:02.598877  722503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:38:02.652380  722503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:38:02.713653  722503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
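Each openssl x509 -noout -in <cert> -checkend 86400 call above succeeds only if the certificate is still valid 24 hours from now; because they all pass, the restart path reuses the existing control-plane certificates. The equivalent check in Go against one of the listed files (a sketch; the path is copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `-checkend 86400`: fail if the cert is no longer valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
}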
	I1115 10:38:02.769396  722503 kubeadm.go:401] StartCluster: {Name:newest-cni-395885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-395885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:38:02.769524  722503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:38:02.769638  722503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:38:02.807579  722503 cri.go:89] found id: "c0b89a9a54e4782d7dd2d073962c3da42d810813dcd395a58854a0e7cbd4fa57"
	I1115 10:38:02.807650  722503 cri.go:89] found id: "5c57ab84f17873a5820a32c264a7b164f1758eeaf5d07f10f42f390ff89b8f0e"
	I1115 10:38:02.807670  722503 cri.go:89] found id: "0ef8b64aeb6b0e2149310a4449a6195f4b20ef81e57dc47596b4eb37353357a7"
	I1115 10:38:02.807695  722503 cri.go:89] found id: "6972587a3df160f4b296d52dabb29cae58f981bb9b1c79934c5379e31c9c1408"
	I1115 10:38:02.807728  722503 cri.go:89] found id: ""
	I1115 10:38:02.807803  722503 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:38:02.827632  722503 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:02Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:38:02.827756  722503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:38:02.836055  722503 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:38:02.836121  722503 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:38:02.836200  722503 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:38:02.845939  722503 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:38:02.846535  722503 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-395885" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:02.846844  722503 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-395885" cluster setting kubeconfig missing "newest-cni-395885" context setting]
	I1115 10:38:02.847325  722503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:02.848792  722503 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:38:02.857139  722503 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:38:02.857207  722503 kubeadm.go:602] duration metric: took 21.066324ms to restartPrimaryControlPlane
	I1115 10:38:02.857231  722503 kubeadm.go:403] duration metric: took 87.845217ms to StartCluster
	I1115 10:38:02.857274  722503 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:02.857349  722503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:02.858320  722503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:02.858572  722503 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:38:02.858884  722503 config.go:182] Loaded profile config "newest-cni-395885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:02.858947  722503 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:38:02.859294  722503 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-395885"
	I1115 10:38:02.859339  722503 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-395885"
	W1115 10:38:02.859364  722503 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:38:02.859442  722503 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:38:02.859958  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:38:02.860125  722503 addons.go:70] Setting dashboard=true in profile "newest-cni-395885"
	I1115 10:38:02.860161  722503 addons.go:239] Setting addon dashboard=true in "newest-cni-395885"
	W1115 10:38:02.860201  722503 addons.go:248] addon dashboard should already be in state true
	I1115 10:38:02.860240  722503 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:38:02.860499  722503 addons.go:70] Setting default-storageclass=true in profile "newest-cni-395885"
	I1115 10:38:02.860514  722503 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-395885"
	I1115 10:38:02.860791  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:38:02.861201  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:38:02.864518  722503 out.go:179] * Verifying Kubernetes components...
	I1115 10:38:02.868108  722503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:02.894575  722503 addons.go:239] Setting addon default-storageclass=true in "newest-cni-395885"
	W1115 10:38:02.894614  722503 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:38:02.894639  722503 host.go:66] Checking if "newest-cni-395885" exists ...
	I1115 10:38:02.895043  722503 cli_runner.go:164] Run: docker container inspect newest-cni-395885 --format={{.State.Status}}
	I1115 10:38:02.917837  722503 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:38:02.920852  722503 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:38:02.920874  722503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:38:02.920940  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:38:02.941469  722503 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:38:02.944539  722503 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:38:02.945872  722503 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:38:02.945888  722503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:38:02.945946  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:38:02.947417  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:38:02.947443  722503 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:38:02.947504  722503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-395885
	I1115 10:38:02.976769  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:38:03.011229  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:38:03.017906  722503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33819 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/newest-cni-395885/id_rsa Username:docker}
	I1115 10:38:03.212898  722503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:38:03.226228  722503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:03.282196  722503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:38:03.339485  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:38:03.339510  722503 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:38:03.449560  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:38:03.449582  722503 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:38:03.543633  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:38:03.543655  722503 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:38:03.587891  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:38:03.587911  722503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:38:03.631310  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:38:03.631336  722503 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:38:03.668520  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:38:03.668542  722503 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:38:03.718592  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:38:03.718616  722503 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:38:03.750121  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:38:03.750152  722503 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:38:03.766618  722503 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:38:03.766642  722503 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:38:03.783740  722503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
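The dashboard, storageclass and storage-provisioner manifests are applied with the node's own kubectl binary and the in-VM admin kubeconfig, as logged above. A stripped-down local equivalent of that invocation with os/exec is sketched below; it lists only a few of the ten dashboard manifests and runs kubectl directly rather than through the SSH runner minikube uses.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply",
		"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
	)
	// Point kubectl at the in-cluster admin kubeconfig, as the logged command does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}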
	I1115 10:38:08.736802  722503 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.510541451s)
	I1115 10:38:08.736851  722503 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:38:08.736913  722503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:38:08.736984  722503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.454766312s)
	I1115 10:38:08.737349  722503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.953581451s)
	I1115 10:38:08.737590  722503 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.524667227s)
	I1115 10:38:08.740508  722503 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-395885 addons enable metrics-server
	
	I1115 10:38:08.760120  722503 api_server.go:72] duration metric: took 5.901494202s to wait for apiserver process to appear ...
	I1115 10:38:08.760141  722503 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:38:08.760160  722503 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:38:08.775606  722503 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:38:08.775631  722503 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
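The 500 above is expected this early: the rbac/bootstrap-roles post-start hook has not finished, so the wait loop keeps polling /healthz until it returns 200, which happens about half a second later in this run. A minimal standalone poller under the same assumptions (endpoint taken from the log, self-signed apiserver certificate), skipping TLS verification purely for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-local cert; a real client should
			// trust the minikube CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz status:", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}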
	I1115 10:38:08.777953  722503 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1115 10:38:08.781031  722503 addons.go:515] duration metric: took 5.92207519s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:38:09.261239  722503 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:38:09.269256  722503 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:38:09.270365  722503 api_server.go:141] control plane version: v1.34.1
	I1115 10:38:09.270394  722503 api_server.go:131] duration metric: took 510.245226ms to wait for apiserver health ...
	I1115 10:38:09.270404  722503 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:38:09.273503  722503 system_pods.go:59] 8 kube-system pods found
	I1115 10:38:09.273537  722503 system_pods.go:61] "coredns-66bc5c9577-mg7hm" [a7b030f3-a8f3-4baf-ba33-cfa56768bc15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:38:09.273546  722503 system_pods.go:61] "etcd-newest-cni-395885" [9ebe1396-d892-4a22-a83c-01ae69b07011] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:38:09.273551  722503 system_pods.go:61] "kindnet-bqt7r" [d9452c06-f40d-4e91-be67-17e243f8edd9] Running
	I1115 10:38:09.273561  722503 system_pods.go:61] "kube-apiserver-newest-cni-395885" [762efbd5-c6b4-4c20-9e93-8f9a68fe2b8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:38:09.273571  722503 system_pods.go:61] "kube-controller-manager-newest-cni-395885" [ed67cb82-b823-42a3-8afa-f2e050c12292] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:38:09.273585  722503 system_pods.go:61] "kube-proxy-t26c4" [17d73502-7107-4d36-8af0-187ea6985a47] Running
	I1115 10:38:09.273591  722503 system_pods.go:61] "kube-scheduler-newest-cni-395885" [216e9e79-e3fe-485d-aaac-03d61448730a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:38:09.273631  722503 system_pods.go:61] "storage-provisioner" [fee63e42-26d6-4e9d-b080-c433640e6144] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:38:09.273636  722503 system_pods.go:74] duration metric: took 3.227053ms to wait for pod list to return data ...
	I1115 10:38:09.273649  722503 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:38:09.275968  722503 default_sa.go:45] found service account: "default"
	I1115 10:38:09.275989  722503 default_sa.go:55] duration metric: took 2.326155ms for default service account to be created ...
	I1115 10:38:09.276001  722503 kubeadm.go:587] duration metric: took 6.417381559s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:38:09.276041  722503 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:38:09.278366  722503 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:38:09.278407  722503 node_conditions.go:123] node cpu capacity is 2
	I1115 10:38:09.278419  722503 node_conditions.go:105] duration metric: took 2.371987ms to run NodePressure ...
	I1115 10:38:09.278430  722503 start.go:242] waiting for startup goroutines ...
	I1115 10:38:09.278443  722503 start.go:247] waiting for cluster config update ...
	I1115 10:38:09.278454  722503 start.go:256] writing updated cluster config ...
	I1115 10:38:09.278747  722503 ssh_runner.go:195] Run: rm -f paused
	I1115 10:38:09.334155  722503 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:38:09.339346  722503 out.go:179] * Done! kubectl is now configured to use "newest-cni-395885" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.219642222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.227646396Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=04b2d791-65ad-42db-a9f4-6656559a6eaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.232352628Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-t26c4/POD" id=15939534-16f0-4f73-aa0f-c953441d20cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.232434488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.236240019Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=15939534-16f0-4f73-aa0f-c953441d20cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.239608581Z" level=info msg="Ran pod sandbox b9f637ee1792cf1ab10e874a7fb8d5f9ede526779b3b930574d226e378cd9104 with infra container: kube-system/kindnet-bqt7r/POD" id=04b2d791-65ad-42db-a9f4-6656559a6eaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.244443917Z" level=info msg="Ran pod sandbox 7f9bc0876a9fa2416c3cc27b25d2edcf2a1ddc20a8ae1c76bc2f3c6646db0d92 with infra container: kube-system/kube-proxy-t26c4/POD" id=15939534-16f0-4f73-aa0f-c953441d20cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.246728275Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f17dd8bd-59a8-48a3-99db-4e38ba6b0401 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.247058063Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=933d36f9-d12f-4660-9b4a-1df982483f02 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.248413119Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fbb5686e-5106-4e6f-a339-f902d6849bdf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.251319441Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3a1f8808-9111-4582-a593-143a8adc269c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.25277312Z" level=info msg="Creating container: kube-system/kindnet-bqt7r/kindnet-cni" id=db63b21b-3da8-402a-b03a-02f3c9ed3472 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.253227614Z" level=info msg="Creating container: kube-system/kube-proxy-t26c4/kube-proxy" id=77775b89-1c34-4728-9ff2-d309eab3cb4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.25327832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.255452167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.259613478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.260091373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.26752479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.268006204Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.302889479Z" level=info msg="Created container be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262: kube-system/kindnet-bqt7r/kindnet-cni" id=db63b21b-3da8-402a-b03a-02f3c9ed3472 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.303622053Z" level=info msg="Starting container: be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262" id=a1fc09ff-fa84-4870-89e8-0da01581184c name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.309927592Z" level=info msg="Created container 2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61: kube-system/kube-proxy-t26c4/kube-proxy" id=77775b89-1c34-4728-9ff2-d309eab3cb4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.310077537Z" level=info msg="Started container" PID=1059 containerID=be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262 description=kube-system/kindnet-bqt7r/kindnet-cni id=a1fc09ff-fa84-4870-89e8-0da01581184c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9f637ee1792cf1ab10e874a7fb8d5f9ede526779b3b930574d226e378cd9104
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.312110809Z" level=info msg="Starting container: 2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61" id=8614c95c-1915-44e2-a69a-28491dc418ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.31445677Z" level=info msg="Started container" PID=1061 containerID=2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61 description=kube-system/kube-proxy-t26c4/kube-proxy id=8614c95c-1915-44e2-a69a-28491dc418ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f9bc0876a9fa2416c3cc27b25d2edcf2a1ddc20a8ae1c76bc2f3c6646db0d92
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2ddcb107379a3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   7f9bc0876a9fa       kube-proxy-t26c4                            kube-system
	be4fa02320a86       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   b9f637ee1792c       kindnet-bqt7r                               kube-system
	c0b89a9a54e47       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   ad6167f3f3e07       etcd-newest-cni-395885                      kube-system
	5c57ab84f1787       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   3001fee8ba470       kube-scheduler-newest-cni-395885            kube-system
	0ef8b64aeb6b0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   d25d9b3cc169f       kube-controller-manager-newest-cni-395885   kube-system
	6972587a3df16       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   cbd95eb5ee27b       kube-apiserver-newest-cni-395885            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-395885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-395885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=newest-cni-395885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_37_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:37:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-395885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:38:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-395885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                9a5c3187-8172-4b90-a319-02f6840f592e
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-395885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-bqt7r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-395885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-395885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-t26c4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-395885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           26s                node-controller  Node newest-cni-395885 event: Registered Node newest-cni-395885 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s (x8 over 10s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           1s                 node-controller  Node newest-cni-395885 event: Registered Node newest-cni-395885 in Controller
	
	
	==> dmesg <==
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	[Nov15 10:37] overlayfs: idmapped layers are currently not supported
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0b89a9a54e4782d7dd2d073962c3da42d810813dcd395a58854a0e7cbd4fa57] <==
	{"level":"warn","ts":"2025-11-15T10:38:06.325662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.348337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.375083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.391009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.411161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.430590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.446833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.461532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.481273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.511864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.522816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.541437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.562784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.575871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.612184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.614661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.648971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.661321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.685093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.695242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.719721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.737143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.754491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.777207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.876265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:13 up  5:20,  0 user,  load average: 3.93, 3.63, 3.05
	Linux newest-cni-395885 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262] <==
	I1115 10:38:08.414607       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:38:08.415066       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:38:08.416298       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:38:08.416372       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:38:08.417204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:38:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:38:08.624334       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:38:08.624352       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:38:08.624361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:38:08.624677       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6972587a3df160f4b296d52dabb29cae58f981bb9b1c79934c5379e31c9c1408] <==
	I1115 10:38:07.662089       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:38:07.669352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:38:07.678586       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:38:07.678708       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:38:07.678802       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:38:07.678835       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:38:07.685834       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:38:07.685870       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:38:07.685878       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:38:07.685884       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:38:07.685889       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:38:07.706935       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:38:07.707480       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:38:08.044429       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:38:08.346917       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:38:08.415738       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:38:08.453613       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:38:08.469388       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:38:08.476475       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:38:08.621667       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.60.7"}
	I1115 10:38:08.719688       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.140.145"}
	I1115 10:38:11.084442       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:38:11.427091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:38:11.580501       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:38:11.640396       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0ef8b64aeb6b0e2149310a4449a6195f4b20ef81e57dc47596b4eb37353357a7] <==
	I1115 10:38:11.111647       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:38:11.111723       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:38:11.111805       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-395885"
	I1115 10:38:11.111851       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:38:11.112569       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:38:11.116864       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:38:11.116939       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:38:11.121702       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:38:11.121753       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:38:11.122107       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:38:11.122169       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:38:11.122408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:38:11.125734       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:38:11.127126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:38:11.127173       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:38:11.127360       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:38:11.133205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:38:11.133259       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:38:11.133301       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:38:11.133334       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:38:11.133355       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:38:11.133364       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:38:11.133785       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:38:11.152141       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:38:11.156372       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61] <==
	I1115 10:38:08.574705       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:38:08.847643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:38:08.959766       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:38:08.959873       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:38:08.959955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:38:08.984234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:38:08.984303       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:38:08.988555       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:38:08.989026       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:38:08.989087       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:08.990604       1 config.go:200] "Starting service config controller"
	I1115 10:38:08.990671       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:38:08.990715       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:38:08.990754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:38:08.990794       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:38:08.990830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:38:08.991659       1 config.go:309] "Starting node config controller"
	I1115 10:38:08.991707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:38:08.991747       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:38:09.091298       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:38:09.091313       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:38:09.091336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5c57ab84f17873a5820a32c264a7b164f1758eeaf5d07f10f42f390ff89b8f0e] <==
	I1115 10:38:05.952758       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:38:07.565823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:38:07.565933       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:38:07.565992       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:38:07.566025       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:38:07.671752       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:38:07.671781       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:07.686752       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:38:07.686836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:07.686856       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:07.686873       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:38:07.788756       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.737286     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-395885\" already exists" pod="kube-system/kube-scheduler-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.737652     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.737718     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.737747     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.738811     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.751501     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-395885\" already exists" pod="kube-system/kube-scheduler-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.751677     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.772304     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-395885\" already exists" pod="kube-system/etcd-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.772492     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.780446     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-395885\" already exists" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.780622     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.792079     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-395885\" already exists" pod="kube-system/kube-controller-manager-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.798332     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.812916     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-395885\" already exists" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.910519     729 apiserver.go:52] "Watching apiserver"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.974131     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040625     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-lib-modules\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040672     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-cni-cfg\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040729     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-xtables-lock\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040748     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17d73502-7107-4d36-8af0-187ea6985a47-xtables-lock\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040764     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17d73502-7107-4d36-8af0-187ea6985a47-lib-modules\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.059556     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:38:10 newest-cni-395885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:38:10 newest-cni-395885 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:38:10 newest-cni-395885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-395885 -n newest-cni-395885
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-395885 -n newest-cni-395885: exit status 2 (363.794869ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-395885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5: exit status 1 (116.42891ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-mg7hm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-w7fnv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bm8s5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-395885
helpers_test.go:243: (dbg) docker inspect newest-cni-395885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618",
	        "Created": "2025-11-15T10:37:16.384426052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:37:53.693163484Z",
	            "FinishedAt": "2025-11-15T10:37:52.687930147Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/hosts",
	        "LogPath": "/var/lib/docker/containers/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618/4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618-json.log",
	        "Name": "/newest-cni-395885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-395885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-395885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aa47ed5c3a57fe14c544e7961b16ad5f9fb6f853226f336f8b02944d8696618",
	                "LowerDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b3d892d00fed1d426108c95146e51a3f2c7dbfcf37861f9534f09b9e124f9934/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-395885",
	                "Source": "/var/lib/docker/volumes/newest-cni-395885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-395885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-395885",
	                "name.minikube.sigs.k8s.io": "newest-cni-395885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6334b263c6a11c5764bb0a0a0e4029fc499c080f7ac3bc93dbb835e3767e4d36",
	            "SandboxKey": "/var/run/docker/netns/6334b263c6a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-395885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7c:eb:80:79:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c000d9cd848aa0e1eda0146b58174b6c18a724587543714ebd99f791f9b9348d",
	                    "EndpointID": "56efd3f5355f28ed0811f910e3cf5f49fbe20a2f4d648ac3a039d73202380048",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-395885",
	                        "4aa47ed5c3a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885: exit status 2 (468.600825ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-395885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-395885 logs -n 25: (1.454404744s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │                     │
	│ stop    │ -p embed-certs-531596 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ addons  │ enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:35 UTC │
	│ start   │ -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:35 UTC │ 15 Nov 25 10:36 UTC │
	│ image   │ no-preload-907610 image list --format=json                                                                                                                                                                                                    │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-395885 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-395885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303164 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ image   │ newest-cni-395885 image list --format=json                                                                                                                                                                                                    │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-395885 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-303164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:38:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:38:14.331002  725691 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:38:14.335328  725691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:14.335383  725691 out.go:374] Setting ErrFile to fd 2...
	I1115 10:38:14.335404  725691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:14.335769  725691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:38:14.336716  725691 out.go:368] Setting JSON to false
	I1115 10:38:14.337781  725691 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19246,"bootTime":1763183849,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:38:14.337881  725691 start.go:143] virtualization:  
	I1115 10:38:14.341728  725691 out.go:179] * [default-k8s-diff-port-303164] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:38:14.345673  725691 notify.go:221] Checking for updates...
	I1115 10:38:14.350806  725691 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:38:14.354258  725691 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:38:14.361727  725691 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:14.364716  725691 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:38:14.368099  725691 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:38:14.371034  725691 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:38:14.374491  725691 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:14.375035  725691 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:38:14.412931  725691 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:38:14.413038  725691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:38:14.504658  725691 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 10:38:14.495708187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:38:14.504775  725691 docker.go:319] overlay module found
	I1115 10:38:14.509137  725691 out.go:179] * Using the docker driver based on existing profile
	
	
	==> CRI-O <==
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.219642222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.227646396Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=04b2d791-65ad-42db-a9f4-6656559a6eaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.232352628Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-t26c4/POD" id=15939534-16f0-4f73-aa0f-c953441d20cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.232434488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.236240019Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=15939534-16f0-4f73-aa0f-c953441d20cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.239608581Z" level=info msg="Ran pod sandbox b9f637ee1792cf1ab10e874a7fb8d5f9ede526779b3b930574d226e378cd9104 with infra container: kube-system/kindnet-bqt7r/POD" id=04b2d791-65ad-42db-a9f4-6656559a6eaa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.244443917Z" level=info msg="Ran pod sandbox 7f9bc0876a9fa2416c3cc27b25d2edcf2a1ddc20a8ae1c76bc2f3c6646db0d92 with infra container: kube-system/kube-proxy-t26c4/POD" id=15939534-16f0-4f73-aa0f-c953441d20cf name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.246728275Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f17dd8bd-59a8-48a3-99db-4e38ba6b0401 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.247058063Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=933d36f9-d12f-4660-9b4a-1df982483f02 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.248413119Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fbb5686e-5106-4e6f-a339-f902d6849bdf name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.251319441Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3a1f8808-9111-4582-a593-143a8adc269c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.25277312Z" level=info msg="Creating container: kube-system/kindnet-bqt7r/kindnet-cni" id=db63b21b-3da8-402a-b03a-02f3c9ed3472 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.253227614Z" level=info msg="Creating container: kube-system/kube-proxy-t26c4/kube-proxy" id=77775b89-1c34-4728-9ff2-d309eab3cb4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.25327832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.255452167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.259613478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.260091373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.26752479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.268006204Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.302889479Z" level=info msg="Created container be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262: kube-system/kindnet-bqt7r/kindnet-cni" id=db63b21b-3da8-402a-b03a-02f3c9ed3472 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.303622053Z" level=info msg="Starting container: be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262" id=a1fc09ff-fa84-4870-89e8-0da01581184c name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.309927592Z" level=info msg="Created container 2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61: kube-system/kube-proxy-t26c4/kube-proxy" id=77775b89-1c34-4728-9ff2-d309eab3cb4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.310077537Z" level=info msg="Started container" PID=1059 containerID=be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262 description=kube-system/kindnet-bqt7r/kindnet-cni id=a1fc09ff-fa84-4870-89e8-0da01581184c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9f637ee1792cf1ab10e874a7fb8d5f9ede526779b3b930574d226e378cd9104
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.312110809Z" level=info msg="Starting container: 2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61" id=8614c95c-1915-44e2-a69a-28491dc418ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:38:08 newest-cni-395885 crio[614]: time="2025-11-15T10:38:08.31445677Z" level=info msg="Started container" PID=1061 containerID=2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61 description=kube-system/kube-proxy-t26c4/kube-proxy id=8614c95c-1915-44e2-a69a-28491dc418ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f9bc0876a9fa2416c3cc27b25d2edcf2a1ddc20a8ae1c76bc2f3c6646db0d92
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2ddcb107379a3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   7f9bc0876a9fa       kube-proxy-t26c4                            kube-system
	be4fa02320a86       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   b9f637ee1792c       kindnet-bqt7r                               kube-system
	c0b89a9a54e47       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   ad6167f3f3e07       etcd-newest-cni-395885                      kube-system
	5c57ab84f1787       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   3001fee8ba470       kube-scheduler-newest-cni-395885            kube-system
	0ef8b64aeb6b0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   d25d9b3cc169f       kube-controller-manager-newest-cni-395885   kube-system
	6972587a3df16       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   cbd95eb5ee27b       kube-apiserver-newest-cni-395885            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-395885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-395885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=newest-cni-395885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_37_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:37:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-395885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:38:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:38:07 +0000   Sat, 15 Nov 2025 10:37:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-395885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                9a5c3187-8172-4b90-a319-02f6840f592e
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-395885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-bqt7r                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-395885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-395885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-t26c4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-395885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-395885 event: Registered Node newest-cni-395885 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-395885 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-395885 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-395885 event: Registered Node newest-cni-395885 in Controller
	
	
	==> dmesg <==
	[ +19.729205] overlayfs: idmapped layers are currently not supported
	[ +12.015205] overlayfs: idmapped layers are currently not supported
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	[Nov15 10:37] overlayfs: idmapped layers are currently not supported
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0b89a9a54e4782d7dd2d073962c3da42d810813dcd395a58854a0e7cbd4fa57] <==
	{"level":"warn","ts":"2025-11-15T10:38:06.325662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.348337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.375083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.391009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.411161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.430590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.446833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.461532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.481273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.511864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.522816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.541437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.562784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.575871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.612184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.614661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.648971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.661321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.685093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.695242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.719721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.737143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.754491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.777207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:06.876265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:38:15 up  5:20,  0 user,  load average: 3.86, 3.62, 3.05
	Linux newest-cni-395885 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be4fa02320a86df3d7fae324277bf481d4bbe9f736c2fbf5ee2d1759714c2262] <==
	I1115 10:38:08.414607       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:38:08.415066       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:38:08.416298       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:38:08.416372       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:38:08.417204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:38:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:38:08.624334       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:38:08.624352       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:38:08.624361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:38:08.624677       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6972587a3df160f4b296d52dabb29cae58f981bb9b1c79934c5379e31c9c1408] <==
	I1115 10:38:07.662089       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:38:07.669352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:38:07.678586       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:38:07.678708       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:38:07.678802       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:38:07.678835       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:38:07.685834       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:38:07.685870       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:38:07.685878       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:38:07.685884       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:38:07.685889       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:38:07.706935       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:38:07.707480       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:38:08.044429       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:38:08.346917       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:38:08.415738       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:38:08.453613       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:38:08.469388       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:38:08.476475       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:38:08.621667       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.60.7"}
	I1115 10:38:08.719688       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.140.145"}
	I1115 10:38:11.084442       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:38:11.427091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:38:11.580501       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:38:11.640396       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0ef8b64aeb6b0e2149310a4449a6195f4b20ef81e57dc47596b4eb37353357a7] <==
	I1115 10:38:11.111647       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:38:11.111723       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:38:11.111805       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-395885"
	I1115 10:38:11.111851       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:38:11.112569       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:38:11.116864       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:38:11.116939       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:38:11.121702       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:38:11.121753       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:38:11.122107       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:38:11.122169       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:38:11.122408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:38:11.125734       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:38:11.127126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:38:11.127173       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:38:11.127360       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:38:11.133205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:38:11.133259       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 10:38:11.133301       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 10:38:11.133334       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 10:38:11.133355       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 10:38:11.133364       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 10:38:11.133785       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:38:11.152141       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:38:11.156372       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [2ddcb107379a34704c19cc94f66c76eda94a57434feb651e377c2590cd93ee61] <==
	I1115 10:38:08.574705       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:38:08.847643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:38:08.959766       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:38:08.959873       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:38:08.959955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:38:08.984234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:38:08.984303       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:38:08.988555       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:38:08.989026       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:38:08.989087       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:08.990604       1 config.go:200] "Starting service config controller"
	I1115 10:38:08.990671       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:38:08.990715       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:38:08.990754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:38:08.990794       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:38:08.990830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:38:08.991659       1 config.go:309] "Starting node config controller"
	I1115 10:38:08.991707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:38:08.991747       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:38:09.091298       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:38:09.091313       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:38:09.091336       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5c57ab84f17873a5820a32c264a7b164f1758eeaf5d07f10f42f390ff89b8f0e] <==
	I1115 10:38:05.952758       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:38:07.565823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:38:07.565933       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:38:07.565992       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:38:07.566025       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:38:07.671752       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:38:07.671781       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:07.686752       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:38:07.686836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:07.686856       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:07.686873       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:38:07.788756       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.737286     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-395885\" already exists" pod="kube-system/kube-scheduler-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.737652     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.737718     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.737747     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.738811     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.751501     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-395885\" already exists" pod="kube-system/kube-scheduler-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.751677     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.772304     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-395885\" already exists" pod="kube-system/etcd-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.772492     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.780446     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-395885\" already exists" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.780622     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.792079     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-395885\" already exists" pod="kube-system/kube-controller-manager-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.798332     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: E1115 10:38:07.812916     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-395885\" already exists" pod="kube-system/kube-apiserver-newest-cni-395885"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.910519     729 apiserver.go:52] "Watching apiserver"
	Nov 15 10:38:07 newest-cni-395885 kubelet[729]: I1115 10:38:07.974131     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040625     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-lib-modules\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040672     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-cni-cfg\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040729     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9452c06-f40d-4e91-be67-17e243f8edd9-xtables-lock\") pod \"kindnet-bqt7r\" (UID: \"d9452c06-f40d-4e91-be67-17e243f8edd9\") " pod="kube-system/kindnet-bqt7r"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040748     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17d73502-7107-4d36-8af0-187ea6985a47-xtables-lock\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.040764     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17d73502-7107-4d36-8af0-187ea6985a47-lib-modules\") pod \"kube-proxy-t26c4\" (UID: \"17d73502-7107-4d36-8af0-187ea6985a47\") " pod="kube-system/kube-proxy-t26c4"
	Nov 15 10:38:08 newest-cni-395885 kubelet[729]: I1115 10:38:08.059556     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 15 10:38:10 newest-cni-395885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:38:10 newest-cni-395885 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:38:10 newest-cni-395885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-395885 -n newest-cni-395885
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-395885 -n newest-cni-395885: exit status 2 (348.268902ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-395885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5: exit status 1 (80.192129ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-mg7hm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-w7fnv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bm8s5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-395885 describe pod coredns-66bc5c9577-mg7hm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w7fnv kubernetes-dashboard-855c9754f9-bm8s5: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-303164 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-303164 --alsologtostderr -v=1: exit status 80 (1.807625214s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-303164 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:39:18.427389  730986 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:39:18.427504  730986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:39:18.427513  730986 out.go:374] Setting ErrFile to fd 2...
	I1115 10:39:18.427518  730986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:39:18.427793  730986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:39:18.428045  730986 out.go:368] Setting JSON to false
	I1115 10:39:18.428072  730986 mustload.go:66] Loading cluster: default-k8s-diff-port-303164
	I1115 10:39:18.428532  730986 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:39:18.429013  730986 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:39:18.446822  730986 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:39:18.447214  730986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:39:18.510250  730986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:39:18.500935868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:39:18.510969  730986 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-303164 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:39:18.517829  730986 out.go:179] * Pausing node default-k8s-diff-port-303164 ... 
	I1115 10:39:18.521264  730986 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:39:18.521583  730986 ssh_runner.go:195] Run: systemctl --version
	I1115 10:39:18.521669  730986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:39:18.538636  730986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:39:18.644767  730986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:39:18.667325  730986 pause.go:52] kubelet running: true
	I1115 10:39:18.667393  730986 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:39:18.929923  730986 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:39:18.930066  730986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:39:19.021047  730986 cri.go:89] found id: "7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce"
	I1115 10:39:19.021080  730986 cri.go:89] found id: "eb40357445059cc14c5f7b7baf983424338a1f3a04ec773e4e548001a06069e0"
	I1115 10:39:19.021086  730986 cri.go:89] found id: "acc8eca44366ae83668276140d7ec0a035ccf8963b6889fe220fec65c5943fe4"
	I1115 10:39:19.021092  730986 cri.go:89] found id: "5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed"
	I1115 10:39:19.021096  730986 cri.go:89] found id: "f55f11e9f461788084a143dcfa22c6414008456df58d4f0cfdfcfdea76b378d2"
	I1115 10:39:19.021099  730986 cri.go:89] found id: "2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f"
	I1115 10:39:19.021102  730986 cri.go:89] found id: "0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960"
	I1115 10:39:19.021105  730986 cri.go:89] found id: "a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22"
	I1115 10:39:19.021108  730986 cri.go:89] found id: "6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf"
	I1115 10:39:19.021115  730986 cri.go:89] found id: "cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	I1115 10:39:19.021121  730986 cri.go:89] found id: "2e643fed5aa284e7891d963b79c953d7c3d1f44044faa4dd0248eb955adca97f"
	I1115 10:39:19.021124  730986 cri.go:89] found id: ""
	I1115 10:39:19.021174  730986 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:39:19.038872  730986 retry.go:31] will retry after 131.582075ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:39:19Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:39:19.171268  730986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:39:19.186062  730986 pause.go:52] kubelet running: false
	I1115 10:39:19.186183  730986 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:39:19.361543  730986 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:39:19.361730  730986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:39:19.428922  730986 cri.go:89] found id: "7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce"
	I1115 10:39:19.428948  730986 cri.go:89] found id: "eb40357445059cc14c5f7b7baf983424338a1f3a04ec773e4e548001a06069e0"
	I1115 10:39:19.428954  730986 cri.go:89] found id: "acc8eca44366ae83668276140d7ec0a035ccf8963b6889fe220fec65c5943fe4"
	I1115 10:39:19.428958  730986 cri.go:89] found id: "5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed"
	I1115 10:39:19.428962  730986 cri.go:89] found id: "f55f11e9f461788084a143dcfa22c6414008456df58d4f0cfdfcfdea76b378d2"
	I1115 10:39:19.428965  730986 cri.go:89] found id: "2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f"
	I1115 10:39:19.428969  730986 cri.go:89] found id: "0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960"
	I1115 10:39:19.428972  730986 cri.go:89] found id: "a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22"
	I1115 10:39:19.428976  730986 cri.go:89] found id: "6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf"
	I1115 10:39:19.428985  730986 cri.go:89] found id: "cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	I1115 10:39:19.428996  730986 cri.go:89] found id: "2e643fed5aa284e7891d963b79c953d7c3d1f44044faa4dd0248eb955adca97f"
	I1115 10:39:19.428999  730986 cri.go:89] found id: ""
	I1115 10:39:19.429050  730986 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:39:19.440268  730986 retry.go:31] will retry after 442.681553ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:39:19Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:39:19.883743  730986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:39:19.896675  730986 pause.go:52] kubelet running: false
	I1115 10:39:19.896737  730986 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:39:20.066153  730986 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:39:20.066256  730986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:39:20.146414  730986 cri.go:89] found id: "7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce"
	I1115 10:39:20.146435  730986 cri.go:89] found id: "eb40357445059cc14c5f7b7baf983424338a1f3a04ec773e4e548001a06069e0"
	I1115 10:39:20.146440  730986 cri.go:89] found id: "acc8eca44366ae83668276140d7ec0a035ccf8963b6889fe220fec65c5943fe4"
	I1115 10:39:20.146444  730986 cri.go:89] found id: "5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed"
	I1115 10:39:20.146447  730986 cri.go:89] found id: "f55f11e9f461788084a143dcfa22c6414008456df58d4f0cfdfcfdea76b378d2"
	I1115 10:39:20.146469  730986 cri.go:89] found id: "2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f"
	I1115 10:39:20.146473  730986 cri.go:89] found id: "0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960"
	I1115 10:39:20.146476  730986 cri.go:89] found id: "a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22"
	I1115 10:39:20.146479  730986 cri.go:89] found id: "6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf"
	I1115 10:39:20.146485  730986 cri.go:89] found id: "cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	I1115 10:39:20.146488  730986 cri.go:89] found id: "2e643fed5aa284e7891d963b79c953d7c3d1f44044faa4dd0248eb955adca97f"
	I1115 10:39:20.146491  730986 cri.go:89] found id: ""
	I1115 10:39:20.146544  730986 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:39:20.162971  730986 out.go:203] 
	W1115 10:39:20.166826  730986 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:39:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:39:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:39:20.166854  730986 out.go:285] * 
	* 
	W1115 10:39:20.174380  730986 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:39:20.178076  730986 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-303164 --alsologtostderr -v=1 failed: exit status 80
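The pause fails in the container-listing step: after disabling the kubelet, minikube shells into the node and runs `sudo runc list -f json`, which exits 1 because `/run/runc` does not exist on this CRI-O node. A minimal manual reproduction from the host (assuming the node container name above and that `docker exec` into the kicbase container is available) would be:

    # Re-run the command minikube retried before giving up.
    docker exec default-k8s-diff-port-303164 sudo runc list -f json
    # Listing through crictl (CRI-O's own CLI), as the earlier cri.go steps do,
    # does not depend on the /run/runc state directory.
    docker exec default-k8s-diff-port-303164 sudo crictl ps -a --quiet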
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-303164
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-303164:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec",
	        "Created": "2025-11-15T10:36:29.397887261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 725907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:38:14.697318691Z",
	            "FinishedAt": "2025-11-15T10:38:13.64804261Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/hosts",
	        "LogPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec-json.log",
	        "Name": "/default-k8s-diff-port-303164",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-303164:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-303164",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec",
	                "LowerDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-303164",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-303164/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-303164",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-303164",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-303164",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fbf5a14ded46f8708b285538f00084af13ce4c5533afa43904c02e4c38a75618",
	            "SandboxKey": "/var/run/docker/netns/fbf5a14ded46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-303164": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d1:16:37:0d:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04f2761baa0d9af0d0867b1125f2a84414f21796e96d64d92b5c112e2b1380e0",
	                    "EndpointID": "6947c36c88fd14f2bc10f156861231b0edc6748e197c3681401c839dba6851ab",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-303164",
	                        "41c6c089346a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
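The inspect dump confirms the node container is running and not paused, with SSH published on 127.0.0.1:33824. When only those fields matter, a format template keeps the check short (container name as above):

    # Pull just the container state and the published SSH port instead of the full document.
    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-303164
    docker port default-k8s-diff-port-303164 22/tcp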
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164: exit status 2 (376.306636ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
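The `{{.Host}}` template above only reports the host state. A broader query over the same profile (component field names assumed from the default `minikube status` output) would be:

    # Show the other component states for the same profile.
    out/minikube-linux-arm64 status -p default-k8s-diff-port-303164 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'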
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs -n 25: (1.290874835s)
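The 25-line excerpt below is the quick version of the log capture the advice box asks for; writing the complete log to a file for a GitHub issue (same profile, flag quoted in the box above) would be:

    # Save the full cluster log to a file suitable for attaching to an issue.
    out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs --file=logs.txt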
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-395885 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-395885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303164 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ image   │ newest-cni-395885 image list --format=json                                                                                                                                                                                                    │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-395885 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-303164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:39 UTC │
	│ delete  │ -p newest-cni-395885                                                                                                                                                                                                                          │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ delete  │ -p newest-cni-395885                                                                                                                                                                                                                          │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ start   │ -p auto-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-864099                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	│ image   │ default-k8s-diff-port-303164 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:39 UTC │ 15 Nov 25 10:39 UTC │
	│ pause   │ -p default-k8s-diff-port-303164 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:38:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:38:18.845427  726986 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:38:18.845580  726986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:18.845592  726986 out.go:374] Setting ErrFile to fd 2...
	I1115 10:38:18.845645  726986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:18.846079  726986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:38:18.847025  726986 out.go:368] Setting JSON to false
	I1115 10:38:18.847990  726986 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19250,"bootTime":1763183849,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:38:18.848086  726986 start.go:143] virtualization:  
	I1115 10:38:18.852023  726986 out.go:179] * [auto-864099] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:38:18.856380  726986 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:38:18.856506  726986 notify.go:221] Checking for updates...
	I1115 10:38:18.862849  726986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:38:18.865994  726986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:18.869213  726986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:38:18.872356  726986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:38:18.875433  726986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:38:18.879025  726986 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:18.879194  726986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:38:18.930121  726986 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:38:18.930260  726986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:38:19.030920  726986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 10:38:19.021085175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:38:19.031019  726986 docker.go:319] overlay module found
	I1115 10:38:19.034321  726986 out.go:179] * Using the docker driver based on user configuration
	I1115 10:38:19.037347  726986 start.go:309] selected driver: docker
	I1115 10:38:19.037370  726986 start.go:930] validating driver "docker" against <nil>
	I1115 10:38:19.037384  726986 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:38:19.038133  726986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:38:19.125560  726986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 10:38:19.115382642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:38:19.125745  726986 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:38:19.125994  726986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:38:19.128976  726986 out.go:179] * Using Docker driver with root privileges
	I1115 10:38:19.131835  726986 cni.go:84] Creating CNI manager for ""
	I1115 10:38:19.131898  726986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:19.131918  726986 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:38:19.132009  726986 start.go:353] cluster config:
	{Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1115 10:38:19.135175  726986 out.go:179] * Starting "auto-864099" primary control-plane node in "auto-864099" cluster
	I1115 10:38:19.138058  726986 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:38:19.141042  726986 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:38:19.143887  726986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:19.143935  726986 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:38:19.143950  726986 cache.go:65] Caching tarball of preloaded images
	I1115 10:38:19.143960  726986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:38:19.144037  726986 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:38:19.144047  726986 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:38:19.144153  726986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/config.json ...
	I1115 10:38:19.144170  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/config.json: {Name:mk7fb890c383a78db32389d094b5012c030c4f5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:19.169856  726986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:38:19.169874  726986 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:38:19.169886  726986 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:38:19.169908  726986 start.go:360] acquireMachinesLock for auto-864099: {Name:mk2d9e06aa8943c9d5c5df210e24fc9695013696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:38:19.170007  726986 start.go:364] duration metric: took 84.183µs to acquireMachinesLock for "auto-864099"
	I1115 10:38:19.170031  726986 start.go:93] Provisioning new machine with config: &{Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:38:19.170098  726986 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:38:14.657509  725691 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-303164" ...
	I1115 10:38:14.657697  725691 cli_runner.go:164] Run: docker start default-k8s-diff-port-303164
	I1115 10:38:14.969509  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:15.000397  725691 kic.go:430] container "default-k8s-diff-port-303164" state is running.
	I1115 10:38:15.000882  725691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:38:15.031629  725691 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json ...
	I1115 10:38:15.031896  725691 machine.go:94] provisionDockerMachine start ...
	I1115 10:38:15.031962  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:15.056129  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:15.056466  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:15.056477  725691 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:38:15.057301  725691 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:38:18.226449  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:38:18.226527  725691 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-303164"
	I1115 10:38:18.226647  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.253339  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:18.253683  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:18.253702  725691 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-303164 && echo "default-k8s-diff-port-303164" | sudo tee /etc/hostname
	I1115 10:38:18.428713  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:38:18.428790  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.454959  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:18.455253  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:18.455270  725691 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-303164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-303164/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-303164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:38:18.621446  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:38:18.621475  725691 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:38:18.621501  725691 ubuntu.go:190] setting up certificates
	I1115 10:38:18.621509  725691 provision.go:84] configureAuth start
	I1115 10:38:18.621568  725691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:38:18.644961  725691 provision.go:143] copyHostCerts
	I1115 10:38:18.645012  725691 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:38:18.645024  725691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:38:18.645104  725691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:38:18.645195  725691 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:38:18.645201  725691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:38:18.645227  725691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:38:18.645289  725691 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:38:18.645295  725691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:38:18.645324  725691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:38:18.645433  725691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-303164 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-303164 localhost minikube]
	I1115 10:38:18.745912  725691 provision.go:177] copyRemoteCerts
	I1115 10:38:18.746178  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:38:18.746249  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.781112  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:18.894389  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:38:18.916058  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:38:18.948978  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:38:18.970273  725691 provision.go:87] duration metric: took 348.739486ms to configureAuth
	I1115 10:38:18.970301  725691 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:38:18.970484  725691 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:18.970599  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.995026  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:18.995351  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:18.995376  725691 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:38:19.403809  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:38:19.403834  725691 machine.go:97] duration metric: took 4.371927008s to provisionDockerMachine
	I1115 10:38:19.403845  725691 start.go:293] postStartSetup for "default-k8s-diff-port-303164" (driver="docker")
	I1115 10:38:19.403856  725691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:38:19.403917  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:38:19.403973  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.428897  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.538034  725691 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:38:19.544732  725691 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:38:19.544765  725691 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:38:19.544776  725691 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:38:19.544852  725691 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:38:19.544984  725691 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:38:19.545148  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:38:19.553235  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:19.586638  725691 start.go:296] duration metric: took 182.77711ms for postStartSetup
	I1115 10:38:19.586773  725691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:38:19.586842  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.606409  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.727919  725691 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:38:19.734556  725691 fix.go:56] duration metric: took 5.104712086s for fixHost
	I1115 10:38:19.734583  725691 start.go:83] releasing machines lock for "default-k8s-diff-port-303164", held for 5.104763113s
	I1115 10:38:19.734685  725691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:38:19.752152  725691 ssh_runner.go:195] Run: cat /version.json
	I1115 10:38:19.752184  725691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:38:19.752213  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.752246  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.772136  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.794296  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.991746  725691 ssh_runner.go:195] Run: systemctl --version
	I1115 10:38:19.998826  725691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:38:20.065816  725691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:38:20.075085  725691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:38:20.075167  725691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:38:20.091094  725691 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:38:20.091119  725691 start.go:496] detecting cgroup driver to use...
	I1115 10:38:20.091243  725691 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:38:20.091347  725691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:38:20.120227  725691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:38:20.148059  725691 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:38:20.148139  725691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:38:20.171307  725691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:38:20.201708  725691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:38:20.387437  725691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:38:20.541932  725691 docker.go:234] disabling docker service ...
	I1115 10:38:20.542013  725691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:38:20.559404  725691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:38:20.573458  725691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:38:20.731284  725691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:38:20.888792  725691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:38:20.904325  725691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:38:20.921752  725691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:38:20.921885  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.931333  725691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:38:20.931447  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.940667  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.949473  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.958801  725691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:38:20.967022  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.976296  725691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.985324  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.994603  725691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:38:21.003989  725691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:38:21.013407  725691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:21.166893  725691 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:38:21.975651  725691 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:38:21.975762  725691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:38:21.980007  725691 start.go:564] Will wait 60s for crictl version
	I1115 10:38:21.980101  725691 ssh_runner.go:195] Run: which crictl
	I1115 10:38:21.983856  725691 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:38:22.020954  725691 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:38:22.021073  725691 ssh_runner.go:195] Run: crio --version
	I1115 10:38:22.063664  725691 ssh_runner.go:195] Run: crio --version
	I1115 10:38:22.105255  725691 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:38:22.108378  725691 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:22.131729  725691 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:38:22.135726  725691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:22.147377  725691 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:38:22.147500  725691 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:22.147552  725691 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:22.182490  725691 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:22.182517  725691 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:38:22.182572  725691 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:22.220442  725691 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:22.220466  725691 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:38:22.220475  725691 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:38:22.220567  725691 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:38:22.220654  725691 ssh_runner.go:195] Run: crio config
	I1115 10:38:22.295775  725691 cni.go:84] Creating CNI manager for ""
	I1115 10:38:22.295825  725691 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:22.295842  725691 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:38:22.295876  725691 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303164 NodeName:default-k8s-diff-port-303164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:38:22.296045  725691 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:38:22.296139  725691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:38:22.304393  725691 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:38:22.304477  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:38:22.311937  725691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:38:22.324440  725691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:38:22.337676  725691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1115 10:38:22.351118  725691 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:38:22.355316  725691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:22.364921  725691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:22.514928  725691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:22.530967  725691 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164 for IP: 192.168.85.2
	I1115 10:38:22.530993  725691 certs.go:195] generating shared ca certs ...
	I1115 10:38:22.531010  725691 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:22.531140  725691 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:38:22.531189  725691 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:38:22.531202  725691 certs.go:257] generating profile certs ...
	I1115 10:38:22.531285  725691 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key
	I1115 10:38:22.531385  725691 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336
	I1115 10:38:22.531425  725691 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key
	I1115 10:38:22.531531  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:38:22.531569  725691 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:38:22.531582  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:38:22.531607  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:38:22.531632  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:38:22.531655  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:38:22.531705  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:22.532716  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:38:22.570700  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:38:22.589472  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:38:22.617410  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:38:22.644795  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:38:22.698962  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:38:22.737489  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:38:22.794912  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:38:22.848801  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:38:22.881148  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:38:22.899661  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:38:22.918600  725691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:38:22.934237  725691 ssh_runner.go:195] Run: openssl version
	I1115 10:38:22.940930  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:38:22.949855  725691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:22.961442  725691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:22.961590  725691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:23.003650  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:38:23.013337  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:38:23.022479  725691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:38:23.026627  725691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:38:23.026739  725691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:38:23.071279  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:38:23.079808  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:38:23.088687  725691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:38:23.092824  725691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:38:23.092933  725691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:38:23.137999  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:38:23.146899  725691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:38:23.151204  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:38:23.195362  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:38:23.237357  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:38:23.317117  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:38:23.383581  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:38:23.505540  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 10:38:23.643970  725691 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:38:23.644083  725691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:38:23.644166  725691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:38:23.718376  725691 cri.go:89] found id: "2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f"
	I1115 10:38:23.718408  725691 cri.go:89] found id: "0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960"
	I1115 10:38:23.718416  725691 cri.go:89] found id: "a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22"
	I1115 10:38:23.718421  725691 cri.go:89] found id: "6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf"
	I1115 10:38:23.718429  725691 cri.go:89] found id: ""
	I1115 10:38:23.718493  725691 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:38:23.746955  725691 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:23Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:38:23.747054  725691 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:38:23.761854  725691 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:38:23.761876  725691 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:38:23.761939  725691 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:38:23.773988  725691 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:38:23.774430  725691 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-303164" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:23.774552  725691 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-303164" cluster setting kubeconfig missing "default-k8s-diff-port-303164" context setting]
	I1115 10:38:23.774931  725691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:23.777326  725691 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:38:23.788529  725691 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:38:23.788602  725691 kubeadm.go:602] duration metric: took 26.719064ms to restartPrimaryControlPlane
	I1115 10:38:23.788637  725691 kubeadm.go:403] duration metric: took 144.685211ms to StartCluster
	I1115 10:38:23.788681  725691 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:23.788760  725691 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:23.789424  725691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:23.789761  725691 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:38:23.790123  725691 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:23.790197  725691 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:38:23.790296  725691 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-303164"
	I1115 10:38:23.790323  725691 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-303164"
	W1115 10:38:23.790344  725691 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:38:23.790389  725691 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:38:23.790848  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.791051  725691 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-303164"
	I1115 10:38:23.791091  725691 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-303164"
	W1115 10:38:23.791115  725691 addons.go:248] addon dashboard should already be in state true
	I1115 10:38:23.791174  725691 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:38:23.791349  725691 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-303164"
	I1115 10:38:23.791363  725691 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-303164"
	I1115 10:38:23.791614  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.792070  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.805285  725691 out.go:179] * Verifying Kubernetes components...
	I1115 10:38:19.173514  726986 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:38:19.173785  726986 start.go:159] libmachine.API.Create for "auto-864099" (driver="docker")
	I1115 10:38:19.173825  726986 client.go:173] LocalClient.Create starting
	I1115 10:38:19.173878  726986 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:38:19.173908  726986 main.go:143] libmachine: Decoding PEM data...
	I1115 10:38:19.173921  726986 main.go:143] libmachine: Parsing certificate...
	I1115 10:38:19.173970  726986 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:38:19.173986  726986 main.go:143] libmachine: Decoding PEM data...
	I1115 10:38:19.173995  726986 main.go:143] libmachine: Parsing certificate...
	I1115 10:38:19.174363  726986 cli_runner.go:164] Run: docker network inspect auto-864099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:38:19.191724  726986 cli_runner.go:211] docker network inspect auto-864099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:38:19.191801  726986 network_create.go:284] running [docker network inspect auto-864099] to gather additional debugging logs...
	I1115 10:38:19.191817  726986 cli_runner.go:164] Run: docker network inspect auto-864099
	W1115 10:38:19.219353  726986 cli_runner.go:211] docker network inspect auto-864099 returned with exit code 1
	I1115 10:38:19.219379  726986 network_create.go:287] error running [docker network inspect auto-864099]: docker network inspect auto-864099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-864099 not found
	I1115 10:38:19.219432  726986 network_create.go:289] output of [docker network inspect auto-864099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-864099 not found
	
	** /stderr **
	I1115 10:38:19.219528  726986 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:19.242771  726986 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:38:19.243116  726986 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:38:19.243450  726986 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:38:19.243850  726986 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195d1d0}
	I1115 10:38:19.243867  726986 network_create.go:124] attempt to create docker network auto-864099 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:38:19.243927  726986 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-864099 auto-864099
	I1115 10:38:19.328585  726986 network_create.go:108] docker network auto-864099 192.168.76.0/24 created
	I1115 10:38:19.328613  726986 kic.go:121] calculated static IP "192.168.76.2" for the "auto-864099" container
	I1115 10:38:19.328697  726986 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:38:19.349874  726986 cli_runner.go:164] Run: docker volume create auto-864099 --label name.minikube.sigs.k8s.io=auto-864099 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:38:19.370347  726986 oci.go:103] Successfully created a docker volume auto-864099
	I1115 10:38:19.370430  726986 cli_runner.go:164] Run: docker run --rm --name auto-864099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-864099 --entrypoint /usr/bin/test -v auto-864099:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:38:20.024081  726986 oci.go:107] Successfully prepared a docker volume auto-864099
	I1115 10:38:20.024169  726986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:20.024180  726986 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:38:20.024254  726986 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-864099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:38:23.813408  725691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:23.884672  725691 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-303164"
	W1115 10:38:23.884694  725691 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:38:23.884718  725691 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:38:23.885124  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.889659  725691 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:38:23.897582  725691 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:38:23.897828  725691 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:38:23.907738  725691 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:38:23.907763  725691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:38:23.907831  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:23.907991  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:38:23.907999  725691 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:38:23.908032  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:23.953895  725691 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:38:23.953916  725691 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:38:23.953980  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:23.997248  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:24.011092  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:24.023555  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:24.225029  725691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:24.273379  725691 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-303164" to be "Ready" ...
	I1115 10:38:24.307994  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:38:24.308021  725691 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:38:24.223975  726986 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-864099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.19968203s)
	I1115 10:38:24.224009  726986 kic.go:203] duration metric: took 4.199825189s to extract preloaded images to volume ...
	W1115 10:38:24.224140  726986 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:38:24.224289  726986 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:38:24.330797  726986 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-864099 --name auto-864099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-864099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-864099 --network auto-864099 --ip 192.168.76.2 --volume auto-864099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:38:24.795146  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Running}}
	I1115 10:38:24.823175  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:38:24.849806  726986 cli_runner.go:164] Run: docker exec auto-864099 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:38:24.915409  726986 oci.go:144] the created container "auto-864099" has a running status.
	I1115 10:38:24.915435  726986 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa...
	I1115 10:38:25.286937  726986 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:38:25.316590  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:38:25.381838  726986 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:38:25.381859  726986 kic_runner.go:114] Args: [docker exec --privileged auto-864099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:38:25.515222  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:38:25.547796  726986 machine.go:94] provisionDockerMachine start ...
	I1115 10:38:25.547901  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:25.591388  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:25.591750  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:25.591761  726986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:38:25.592548  726986 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:38:28.765392  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-864099
	
	I1115 10:38:28.765458  726986 ubuntu.go:182] provisioning hostname "auto-864099"
	I1115 10:38:28.765563  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:28.791200  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:28.791513  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:28.791525  726986 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-864099 && echo "auto-864099" | sudo tee /etc/hostname
	I1115 10:38:24.354693  725691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:38:24.365437  725691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:38:24.399243  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:38:24.399271  725691 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:38:24.562539  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:38:24.562568  725691 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:38:24.641852  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:38:24.641880  725691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:38:24.708054  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:38:24.708081  725691 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:38:24.737867  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:38:24.737890  725691 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:38:24.790359  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:38:24.790386  725691 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:38:24.828437  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:38:24.828467  725691 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:38:24.856533  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:38:24.856560  725691 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:38:24.899278  725691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
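	The ten dashboard manifests staged above are applied in a single kubectl invocation against the in-cluster kubeconfig. A minimal sketch (not part of the logged run) of checking the resulting objects afterwards, reusing the kubectl binary and addon paths shown in this log:
	# hedged example; paths copied from the apply command above
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get -f /etc/kubernetes/addons/dashboard-dp.yaml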
	I1115 10:38:30.178435  725691 node_ready.go:49] node "default-k8s-diff-port-303164" is "Ready"
	I1115 10:38:30.178472  725691 node_ready.go:38] duration metric: took 5.905033549s for node "default-k8s-diff-port-303164" to be "Ready" ...
	I1115 10:38:30.178489  725691 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:38:30.178550  725691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:38:30.491978  725691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.137249381s)
	I1115 10:38:32.424693  725691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.059217891s)
	I1115 10:38:32.424813  725691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.525500603s)
	I1115 10:38:32.424987  725691 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.246419974s)
	I1115 10:38:32.425010  725691 api_server.go:72] duration metric: took 8.635194321s to wait for apiserver process to appear ...
	I1115 10:38:32.425017  725691 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:38:32.425034  725691 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:38:32.428254  725691 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-303164 addons enable metrics-server
	
	I1115 10:38:32.431176  725691 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:38:28.998151  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-864099
	
	I1115 10:38:28.998245  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:29.025808  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:29.026126  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:29.026148  726986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-864099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-864099/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-864099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:38:29.217615  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:38:29.217700  726986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:38:29.217753  726986 ubuntu.go:190] setting up certificates
	I1115 10:38:29.217782  726986 provision.go:84] configureAuth start
	I1115 10:38:29.217867  726986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-864099
	I1115 10:38:29.246780  726986 provision.go:143] copyHostCerts
	I1115 10:38:29.246843  726986 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:38:29.246854  726986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:38:29.246931  726986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:38:29.247013  726986 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:38:29.247018  726986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:38:29.247042  726986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:38:29.247094  726986 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:38:29.247099  726986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:38:29.247121  726986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:38:29.247167  726986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.auto-864099 san=[127.0.0.1 192.168.76.2 auto-864099 localhost minikube]
	I1115 10:38:29.873151  726986 provision.go:177] copyRemoteCerts
	I1115 10:38:29.873271  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:38:29.873345  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:29.891168  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.007521  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 10:38:30.030561  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1115 10:38:30.057067  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:38:30.084412  726986 provision.go:87] duration metric: took 866.588413ms to configureAuth
	I1115 10:38:30.084488  726986 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:38:30.084715  726986 config.go:182] Loaded profile config "auto-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:30.084880  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.117535  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:30.117876  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:30.117894  726986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:38:30.448801  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:38:30.448908  726986 machine.go:97] duration metric: took 4.901089868s to provisionDockerMachine
	I1115 10:38:30.448948  726986 client.go:176] duration metric: took 11.275115145s to LocalClient.Create
	I1115 10:38:30.449020  726986 start.go:167] duration metric: took 11.275231843s to libmachine.API.Create "auto-864099"
	I1115 10:38:30.449056  726986 start.go:293] postStartSetup for "auto-864099" (driver="docker")
	I1115 10:38:30.449079  726986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:38:30.449190  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:38:30.449254  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.472725  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.598713  726986 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:38:30.602911  726986 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:38:30.602954  726986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:38:30.602966  726986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:38:30.603030  726986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:38:30.603129  726986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:38:30.603261  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:38:30.612792  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:30.637797  726986 start.go:296] duration metric: took 188.713607ms for postStartSetup
	I1115 10:38:30.638238  726986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-864099
	I1115 10:38:30.656826  726986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/config.json ...
	I1115 10:38:30.657157  726986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:38:30.657224  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.675461  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.783877  726986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:38:30.792200  726986 start.go:128] duration metric: took 11.62208685s to createHost
	I1115 10:38:30.792225  726986 start.go:83] releasing machines lock for "auto-864099", held for 11.622208201s
	I1115 10:38:30.792312  726986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-864099
	I1115 10:38:30.820536  726986 ssh_runner.go:195] Run: cat /version.json
	I1115 10:38:30.820589  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.821008  726986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:38:30.821070  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.869024  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.870637  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.985444  726986 ssh_runner.go:195] Run: systemctl --version
	I1115 10:38:31.127466  726986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:38:31.193216  726986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:38:31.198079  726986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:38:31.198159  726986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:38:31.250283  726986 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1115 10:38:31.250359  726986 start.go:496] detecting cgroup driver to use...
	I1115 10:38:31.250413  726986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:38:31.250489  726986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:38:31.286390  726986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:38:31.305822  726986 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:38:31.305943  726986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:38:31.331452  726986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:38:31.361465  726986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:38:31.592119  726986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:38:31.785123  726986 docker.go:234] disabling docker service ...
	I1115 10:38:31.785208  726986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:38:31.821227  726986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:38:31.844725  726986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:38:32.028187  726986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:38:32.246072  726986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:38:32.261709  726986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:38:32.290959  726986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:38:32.291071  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.305237  726986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:38:32.305308  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.325323  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.336329  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.345222  726986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:38:32.357890  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.370205  726986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.388140  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.396830  726986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:38:32.406428  726986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:38:32.422108  726986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:32.608099  726986 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:38:32.768989  726986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:38:32.769083  726986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:38:32.772895  726986 start.go:564] Will wait 60s for crictl version
	I1115 10:38:32.772966  726986 ssh_runner.go:195] Run: which crictl
	I1115 10:38:32.776744  726986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:38:32.809255  726986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:38:32.809360  726986 ssh_runner.go:195] Run: crio --version
	I1115 10:38:32.848450  726986 ssh_runner.go:195] Run: crio --version
	I1115 10:38:32.890904  726986 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:38:32.893935  726986 cli_runner.go:164] Run: docker network inspect auto-864099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:32.911081  726986 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:38:32.915190  726986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:32.931172  726986 kubeadm.go:884] updating cluster {Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:38:32.931297  726986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:32.931350  726986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:32.970343  726986 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:32.970363  726986 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:38:32.970416  726986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:33.003846  726986 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:33.003941  726986 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:38:33.003964  726986 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:38:33.004128  726986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-864099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:38:33.004288  726986 ssh_runner.go:195] Run: crio config
	I1115 10:38:33.059603  726986 cni.go:84] Creating CNI manager for ""
	I1115 10:38:33.059626  726986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:33.059639  726986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:38:33.059679  726986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-864099 NodeName:auto-864099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:38:33.059861  726986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-864099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
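	The block above is the kubeadm configuration minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) before running init. As a hedged sketch, such a rendered config can be exercised without modifying the node via kubeadm's standard dry-run mode; the binary directory and config path below are the ones used by the init invocation later in this log:
	# print what kubeadm would generate from the rendered config, without applying it
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run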
	I1115 10:38:33.059940  726986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:38:33.067554  726986 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:38:33.067622  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:38:33.074895  726986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1115 10:38:33.087208  726986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:38:33.100705  726986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1115 10:38:33.113165  726986 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:38:33.116837  726986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:33.126287  726986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:33.282287  726986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:33.300158  726986 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099 for IP: 192.168.76.2
	I1115 10:38:33.300176  726986 certs.go:195] generating shared ca certs ...
	I1115 10:38:33.300192  726986 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:33.300346  726986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:38:33.300388  726986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:38:33.300403  726986 certs.go:257] generating profile certs ...
	I1115 10:38:33.300485  726986 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.key
	I1115 10:38:33.300496  726986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt with IP's: []
	I1115 10:38:32.433952  725691 addons.go:515] duration metric: took 8.643744284s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:38:32.438688  725691 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:38:32.440427  725691 api_server.go:141] control plane version: v1.34.1
	I1115 10:38:32.440456  725691 api_server.go:131] duration metric: took 15.431938ms to wait for apiserver health ...
	I1115 10:38:32.440466  725691 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:38:32.448511  725691 system_pods.go:59] 8 kube-system pods found
	I1115 10:38:32.448550  725691 system_pods.go:61] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:38:32.448561  725691 system_pods.go:61] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:38:32.448568  725691 system_pods.go:61] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:38:32.448575  725691 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:38:32.448589  725691 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:38:32.448598  725691 system_pods.go:61] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:38:32.448606  725691 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:38:32.448610  725691 system_pods.go:61] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Running
	I1115 10:38:32.448617  725691 system_pods.go:74] duration metric: took 8.141458ms to wait for pod list to return data ...
	I1115 10:38:32.448628  725691 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:38:32.453243  725691 default_sa.go:45] found service account: "default"
	I1115 10:38:32.453273  725691 default_sa.go:55] duration metric: took 4.637581ms for default service account to be created ...
	I1115 10:38:32.453283  725691 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:38:32.549028  725691 system_pods.go:86] 8 kube-system pods found
	I1115 10:38:32.549115  725691 system_pods.go:89] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:38:32.549143  725691 system_pods.go:89] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:38:32.549184  725691 system_pods.go:89] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:38:32.549216  725691 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:38:32.549243  725691 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:38:32.549276  725691 system_pods.go:89] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:38:32.549302  725691 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:38:32.549326  725691 system_pods.go:89] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Running
	I1115 10:38:32.549366  725691 system_pods.go:126] duration metric: took 96.07623ms to wait for k8s-apps to be running ...
	I1115 10:38:32.549393  725691 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:38:32.549480  725691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:38:32.590116  725691 system_svc.go:56] duration metric: took 40.712962ms WaitForService to wait for kubelet
	I1115 10:38:32.590194  725691 kubeadm.go:587] duration metric: took 8.800375917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:38:32.590227  725691 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:38:32.594738  725691 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:38:32.594819  725691 node_conditions.go:123] node cpu capacity is 2
	I1115 10:38:32.594845  725691 node_conditions.go:105] duration metric: took 4.598952ms to run NodePressure ...
	I1115 10:38:32.594871  725691 start.go:242] waiting for startup goroutines ...
	I1115 10:38:32.594910  725691 start.go:247] waiting for cluster config update ...
	I1115 10:38:32.594937  725691 start.go:256] writing updated cluster config ...
	I1115 10:38:32.595296  725691 ssh_runner.go:195] Run: rm -f paused
	I1115 10:38:32.599419  725691 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:38:32.648045  725691 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-97gv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:38:34.410122  726986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt ...
	I1115 10:38:34.410154  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: {Name:mk72ef7aa2e4f5c07d0deafadc796b25165e3def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:34.410395  726986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.key ...
	I1115 10:38:34.410411  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.key: {Name:mke3768194deaa0353f27df285299cb4cf39a568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:34.410513  726986 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4
	I1115 10:38:34.410532  726986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:38:35.251490  726986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4 ...
	I1115 10:38:35.251521  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4: {Name:mke056e375cfbf899d519da388cae11f8b474a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.251735  726986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4 ...
	I1115 10:38:35.251751  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4: {Name:mk0a7818a031e606695d20b1279ae9829b9f1433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.251850  726986 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt
	I1115 10:38:35.251932  726986 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key
	I1115 10:38:35.251992  726986 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key
	I1115 10:38:35.252005  726986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt with IP's: []
	I1115 10:38:35.907210  726986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt ...
	I1115 10:38:35.907241  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt: {Name:mkafd0da0cbfee412fbce111b94c9d89ae9707e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.907431  726986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key ...
	I1115 10:38:35.907444  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key: {Name:mk1c58da140cd795e8b2d8ac2422e39b840bc96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.907622  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:38:35.907665  726986 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:38:35.907678  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:38:35.907704  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:38:35.907734  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:38:35.907763  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:38:35.907814  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:35.908526  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:38:35.927276  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:38:35.946192  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:38:35.964089  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:38:35.981293  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1115 10:38:35.999750  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:38:36.024106  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:38:36.042803  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:38:36.062599  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:38:36.080800  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:38:36.098690  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:38:36.116609  726986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:38:36.129769  726986 ssh_runner.go:195] Run: openssl version
	I1115 10:38:36.135963  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:38:36.144257  726986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:38:36.147813  726986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:38:36.147877  726986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:38:36.190385  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:38:36.198915  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:38:36.207725  726986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:38:36.211465  726986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:38:36.211529  726986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:38:36.252881  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:38:36.261237  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:38:36.268993  726986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:36.272563  726986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:36.272622  726986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:36.313880  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
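	The openssl/ln sequence above installs each CA under OpenSSL's subject-hash naming scheme, in which a symlink named <subject_hash>.0 in /etc/ssl/certs lets verifiers locate the issuer certificate by hash. A short sketch of confirming that the hash matches the link created above (hash value copied from this log):
	# -hash prints the subject hash that names the symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0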
	I1115 10:38:36.324093  726986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:38:36.339423  726986 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:38:36.339494  726986 kubeadm.go:401] StartCluster: {Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:38:36.339577  726986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:38:36.339661  726986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:38:36.403085  726986 cri.go:89] found id: ""
	I1115 10:38:36.403187  726986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:38:36.415069  726986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:38:36.423222  726986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:38:36.423296  726986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:38:36.435194  726986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:38:36.435213  726986 kubeadm.go:158] found existing configuration files:
	
	I1115 10:38:36.435272  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:38:36.446823  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:38:36.446892  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:38:36.455564  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:38:36.464362  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:38:36.464440  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:38:36.472500  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:38:36.481344  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:38:36.481449  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:38:36.489130  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:38:36.497131  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:38:36.497199  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:38:36.505382  726986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:38:36.547697  726986 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:38:36.548052  726986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:38:36.572808  726986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:38:36.572887  726986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:38:36.572929  726986 kubeadm.go:319] OS: Linux
	I1115 10:38:36.572981  726986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:38:36.573037  726986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:38:36.573089  726986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:38:36.573144  726986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:38:36.573199  726986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:38:36.573254  726986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:38:36.573306  726986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:38:36.573359  726986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:38:36.573412  726986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:38:36.661994  726986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:38:36.662112  726986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:38:36.662214  726986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:38:36.672937  726986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:38:36.678465  726986 out.go:252]   - Generating certificates and keys ...
	I1115 10:38:36.678558  726986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:38:36.678633  726986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:38:37.018019  726986 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:38:37.105295  726986 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:38:37.725158  726986 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:38:38.729483  726986 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1115 10:38:34.660658  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:36.666934  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:39.200967  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:38.892411  726986 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:38:38.892624  726986 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-864099 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:38:39.669238  726986 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:38:39.669381  726986 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-864099 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:38:40.034451  726986 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:38:40.918211  726986 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:38:41.726276  726986 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:38:41.726405  726986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:38:42.280158  726986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:38:43.196635  726986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:38:43.692552  726986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	W1115 10:38:41.654614  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:44.155005  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:44.480869  726986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:38:44.846901  726986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:38:44.847080  726986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:38:44.851980  726986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:38:44.855450  726986 out.go:252]   - Booting up control plane ...
	I1115 10:38:44.855831  726986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:38:44.855978  726986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:38:44.859621  726986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:38:44.901384  726986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:38:44.901498  726986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:38:44.912302  726986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:38:44.912939  726986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:38:44.913018  726986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:38:45.130060  726986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:38:45.130283  726986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:38:47.133965  726986 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000819514s
	I1115 10:38:47.134146  726986 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:38:47.134275  726986 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 10:38:47.134412  726986 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:38:47.134529  726986 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 10:38:46.162464  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:48.165890  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:51.978574  726986 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.844253712s
	I1115 10:38:53.678172  726986 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.544437788s
	W1115 10:38:50.170285  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:52.653105  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:54.636480  726986 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502541168s
	I1115 10:38:54.664732  726986 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:38:54.694533  726986 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:38:54.709174  726986 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:38:54.709406  726986 kubeadm.go:319] [mark-control-plane] Marking the node auto-864099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:38:54.726149  726986 kubeadm.go:319] [bootstrap-token] Using token: nl7m3s.d5tp1el1kgavz79t
	I1115 10:38:54.729037  726986 out.go:252]   - Configuring RBAC rules ...
	I1115 10:38:54.729160  726986 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:38:54.746104  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:38:54.759545  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:38:54.763853  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:38:54.771599  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:38:54.777904  726986 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:38:55.043989  726986 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:38:55.516434  726986 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:38:56.043715  726986 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:38:56.045110  726986 kubeadm.go:319] 
	I1115 10:38:56.045227  726986 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:38:56.045241  726986 kubeadm.go:319] 
	I1115 10:38:56.045319  726986 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:38:56.045324  726986 kubeadm.go:319] 
	I1115 10:38:56.045349  726986 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:38:56.045408  726986 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:38:56.045470  726986 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:38:56.045476  726986 kubeadm.go:319] 
	I1115 10:38:56.045531  726986 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:38:56.045536  726986 kubeadm.go:319] 
	I1115 10:38:56.045584  726986 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:38:56.045618  726986 kubeadm.go:319] 
	I1115 10:38:56.045673  726986 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:38:56.045749  726986 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:38:56.045817  726986 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:38:56.045822  726986 kubeadm.go:319] 
	I1115 10:38:56.045907  726986 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:38:56.045984  726986 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:38:56.045989  726986 kubeadm.go:319] 
	I1115 10:38:56.046073  726986 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nl7m3s.d5tp1el1kgavz79t \
	I1115 10:38:56.046177  726986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:38:56.046198  726986 kubeadm.go:319] 	--control-plane 
	I1115 10:38:56.046202  726986 kubeadm.go:319] 
	I1115 10:38:56.046288  726986 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:38:56.046293  726986 kubeadm.go:319] 
	I1115 10:38:56.046375  726986 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nl7m3s.d5tp1el1kgavz79t \
	I1115 10:38:56.046478  726986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:38:56.050734  726986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:38:56.050976  726986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:38:56.051092  726986 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:38:56.051116  726986 cni.go:84] Creating CNI manager for ""
	I1115 10:38:56.051124  726986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:56.056112  726986 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:38:56.058944  726986 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:38:56.064567  726986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:38:56.064640  726986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:38:56.079735  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:38:56.417521  726986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:38:56.417679  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:56.417767  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-864099 minikube.k8s.io/updated_at=2025_11_15T10_38_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=auto-864099 minikube.k8s.io/primary=true
	I1115 10:38:56.434615  726986 ops.go:34] apiserver oom_adj: -16
	I1115 10:38:56.576916  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:57.077559  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:57.577839  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:58.077038  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:58.577414  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:38:54.653500  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:57.156170  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:59.156437  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:59.077585  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:59.577760  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:39:00.104615  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:39:00.577250  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:39:00.727780  726986 kubeadm.go:1114] duration metric: took 4.310134337s to wait for elevateKubeSystemPrivileges
	I1115 10:39:00.727809  726986 kubeadm.go:403] duration metric: took 24.388318783s to StartCluster
	I1115 10:39:00.727826  726986 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:39:00.727892  726986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:39:00.728906  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:39:00.729149  726986 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:39:00.729252  726986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:39:00.729560  726986 config.go:182] Loaded profile config "auto-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:39:00.729643  726986 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:39:00.729725  726986 addons.go:70] Setting storage-provisioner=true in profile "auto-864099"
	I1115 10:39:00.729747  726986 addons.go:239] Setting addon storage-provisioner=true in "auto-864099"
	I1115 10:39:00.729778  726986 host.go:66] Checking if "auto-864099" exists ...
	I1115 10:39:00.730462  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:39:00.730710  726986 addons.go:70] Setting default-storageclass=true in profile "auto-864099"
	I1115 10:39:00.730735  726986 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-864099"
	I1115 10:39:00.730986  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:39:00.735384  726986 out.go:179] * Verifying Kubernetes components...
	I1115 10:39:00.738434  726986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:39:00.798355  726986 addons.go:239] Setting addon default-storageclass=true in "auto-864099"
	I1115 10:39:00.798402  726986 host.go:66] Checking if "auto-864099" exists ...
	I1115 10:39:00.798824  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:39:00.799451  726986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:39:00.802665  726986 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:39:00.802685  726986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:39:00.802754  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:39:00.836445  726986 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:39:00.836473  726986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:39:00.836542  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:39:00.845858  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:39:00.865217  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:39:01.222549  726986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:39:01.222562  726986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:39:01.223561  726986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:39:01.363757  726986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:39:01.791160  726986 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 10:39:01.793052  726986 node_ready.go:35] waiting up to 15m0s for node "auto-864099" to be "Ready" ...
	I1115 10:39:02.091798  726986 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:39:02.094843  726986 addons.go:515] duration metric: took 1.365174899s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:39:02.296895  726986 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-864099" context rescaled to 1 replicas
	W1115 10:39:03.796135  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:01.158465  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:39:03.653471  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:39:04.152932  725691 pod_ready.go:94] pod "coredns-66bc5c9577-97gv6" is "Ready"
	I1115 10:39:04.152962  725691 pod_ready.go:86] duration metric: took 31.504889197s for pod "coredns-66bc5c9577-97gv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.155653  725691 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.160189  725691 pod_ready.go:94] pod "etcd-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:04.160215  725691 pod_ready.go:86] duration metric: took 4.534265ms for pod "etcd-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.162379  725691 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.167078  725691 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:04.167109  725691 pod_ready.go:86] duration metric: took 4.701718ms for pod "kube-apiserver-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.169747  725691 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.351806  725691 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:04.351836  725691 pod_ready.go:86] duration metric: took 182.060358ms for pod "kube-controller-manager-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.551929  725691 pod_ready.go:83] waiting for pod "kube-proxy-vmnnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.951065  725691 pod_ready.go:94] pod "kube-proxy-vmnnc" is "Ready"
	I1115 10:39:04.951095  725691 pod_ready.go:86] duration metric: took 399.141119ms for pod "kube-proxy-vmnnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:05.151042  725691 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:05.551956  725691 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:05.551984  725691 pod_ready.go:86] duration metric: took 400.909495ms for pod "kube-scheduler-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:05.551998  725691 pod_ready.go:40] duration metric: took 32.952551124s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:39:05.628724  725691 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:39:05.631691  725691 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-303164" cluster and "default" namespace by default
	W1115 10:39:05.802197  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:08.296682  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:10.796088  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:13.296605  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:15.796558  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:18.295786  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 10:38:59 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:38:59.107961127Z" level=info msg="Removed container 378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8/dashboard-metrics-scraper" id=18be7d6a-36a7-4bc6-9e0c-883eb1b430ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:39:01 default-k8s-diff-port-303164 conmon[1147]: conmon 5cb75eb11bbd0b60da9e <ninfo>: container 1157 exited with status 1
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.142145601Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9a9da681-1dd7-4c57-8859-8ae059f4aa25 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.147090357Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b1a59432-e179-453f-9d53-7075b951cdce name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.14867469Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=96ff6042-7fc2-49bc-8b75-b6132fe6c65b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.14879403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.163755671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.163981765Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8e7cc255d600cba8962f7e6b90dc0ccb4b8fa0a12cf1be3c72e20d9788d3c687/merged/etc/passwd: no such file or directory"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.164013083Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8e7cc255d600cba8962f7e6b90dc0ccb4b8fa0a12cf1be3c72e20d9788d3c687/merged/etc/group: no such file or directory"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.164382427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.18879369Z" level=info msg="Created container 7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce: kube-system/storage-provisioner/storage-provisioner" id=96ff6042-7fc2-49bc-8b75-b6132fe6c65b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.190064334Z" level=info msg="Starting container: 7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce" id=4ec1e593-050d-4117-94ef-3c60e37b177d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.19179414Z" level=info msg="Started container" PID=1638 containerID=7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce description=kube-system/storage-provisioner/storage-provisioner id=4ec1e593-050d-4117-94ef-3c60e37b177d name=/runtime.v1.RuntimeService/StartContainer sandboxID=15e83ebe97d8004a4abacd8eae8bc0cb217e01894a903c6442ec2ce84dbdde08
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.710968077Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.714823427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.714987501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.71508079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.718509002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.718544472Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.718571097Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.721714082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.721745638Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.721769571Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.724756458Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.724790074Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7566f2a7beef6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   15e83ebe97d80       storage-provisioner                                    kube-system
	cf76876a687d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   18b8546920675       dashboard-metrics-scraper-6ffb444bf9-lhct8             kubernetes-dashboard
	2e643fed5aa28       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   436c86a6c86ce       kubernetes-dashboard-855c9754f9-4mmm8                  kubernetes-dashboard
	68cbd4128a0f6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   3bb342a56cfa5       busybox                                                default
	eb40357445059       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   09609883975c6       coredns-66bc5c9577-97gv6                               kube-system
	acc8eca44366a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   69ee271d8f824       kube-proxy-vmnnc                                       kube-system
	5cb75eb11bbd0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   15e83ebe97d80       storage-provisioner                                    kube-system
	f55f11e9f4617       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   89d73a963f244       kindnet-rph85                                          kube-system
	2c910a1bc9819       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   7f9235846f2c4       kube-scheduler-default-k8s-diff-port-303164            kube-system
	0530aabdcbb5a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   6d8f2f626aa27       kube-apiserver-default-k8s-diff-port-303164            kube-system
	a98fb964f4025       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   d4ac7937a63f0       etcd-default-k8s-diff-port-303164                      kube-system
	6b4d8bfc8b089       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   e2d720b979ca4       kube-controller-manager-default-k8s-diff-port-303164   kube-system
	
	
	==> coredns [eb40357445059cc14c5f7b7baf983424338a1f3a04ec773e4e548001a06069e0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47176 - 28927 "HINFO IN 2312371740486467996.6041860029921088659. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021668448s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-303164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-303164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=default-k8s-diff-port-303164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_36_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:36:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-303164
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:37:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-303164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                4f8ed4eb-3c24-41b5-a3a9-de151f112693
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-97gv6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-303164                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-rph85                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-303164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-303164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-vmnnc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-303164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lhct8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4mmm8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m16s              kube-proxy       
	  Normal   Starting                 48s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m23s              kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m23s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s              kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m23s              kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m23s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s              node-controller  Node default-k8s-diff-port-303164 event: Registered Node default-k8s-diff-port-303164 in Controller
	  Normal   NodeReady                96s                kubelet          Node default-k8s-diff-port-303164 status is now: NodeReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                node-controller  Node default-k8s-diff-port-303164 event: Registered Node default-k8s-diff-port-303164 in Controller
	
	
	==> dmesg <==
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	[Nov15 10:37] overlayfs: idmapped layers are currently not supported
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[ +20.770485] overlayfs: idmapped layers are currently not supported
	[ +24.092912] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22] <==
	{"level":"warn","ts":"2025-11-15T10:38:28.361262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.381003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.399140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.420778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.435671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.448340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.464676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.486384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.498820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.521683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.532319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.557806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.573726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.611460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.620673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.657088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.671761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.706866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.725575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.745919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.770072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.800875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.830042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.838220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.920235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41422","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:39:21 up  5:21,  0 user,  load average: 3.58, 3.69, 3.12
	Linux default-k8s-diff-port-303164 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f55f11e9f461788084a143dcfa22c6414008456df58d4f0cfdfcfdea76b378d2] <==
	I1115 10:38:31.473889       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:38:31.474124       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:38:31.474239       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:38:31.474250       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:38:31.474260       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:38:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:38:31.718311       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:38:31.718338       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:38:31.718347       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:38:31.718663       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:39:01.711174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:39:01.718975       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:39:01.719101       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:39:01.719214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:39:03.118490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:39:03.118522       1 metrics.go:72] Registering metrics
	I1115 10:39:03.118571       1 controller.go:711] "Syncing nftables rules"
	I1115 10:39:11.710625       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:39:11.710684       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960] <==
	I1115 10:38:30.222774       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:38:30.222958       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:38:30.237383       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:38:30.237838       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:38:30.237865       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:38:30.237874       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:38:30.237881       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:38:30.238025       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:38:30.238071       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1115 10:38:30.247595       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:38:30.300412       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:38:30.303150       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:38:30.303178       1 policy_source.go:240] refreshing policies
	I1115 10:38:30.374511       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:38:30.759266       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:38:30.946146       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:38:31.738797       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:38:31.937533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:38:32.028786       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:38:32.081587       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:38:32.338993       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.120.165"}
	I1115 10:38:32.405281       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.28.103"}
	I1115 10:38:34.638529       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:38:34.984288       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:38:35.216984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf] <==
	I1115 10:38:34.644276       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:38:34.648331       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:38:34.651160       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:38:34.651288       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:38:34.652485       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:38:34.657817       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:38:34.659980       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:38:34.663168       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:38:34.667408       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:38:34.669652       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:38:34.674015       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:38:34.675751       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:38:34.677878       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:38:34.678075       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:38:34.678520       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-303164"
	I1115 10:38:34.678613       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:38:34.677979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:38:34.677946       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:38:34.677959       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:38:34.677969       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:38:34.680568       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:38:34.685888       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:38:34.701688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:38:34.701764       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:38:34.701797       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [acc8eca44366ae83668276140d7ec0a035ccf8963b6889fe220fec65c5943fe4] <==
	I1115 10:38:32.212584       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:38:32.534759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:38:32.634996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:38:32.635078       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:38:32.635166       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:38:32.688135       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:38:32.688254       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:38:32.711665       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:38:32.712061       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:38:32.712228       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:32.713577       1 config.go:200] "Starting service config controller"
	I1115 10:38:32.713817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:38:32.713881       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:38:32.713935       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:38:32.713973       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:38:32.713999       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:38:32.714844       1 config.go:309] "Starting node config controller"
	I1115 10:38:32.714894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:38:32.714924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:38:32.814793       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:38:32.814832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:38:32.814874       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f] <==
	I1115 10:38:26.890625       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:38:30.103128       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:38:30.103251       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:38:30.103308       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:38:30.103346       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:38:30.243263       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:38:30.243383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:30.252154       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:38:30.252369       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:38:30.254755       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:30.254848       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:30.356278       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198214     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flvw\" (UniqueName: \"kubernetes.io/projected/6d9ff063-ba22-4965-aa68-699ee10b68f9-kube-api-access-2flvw\") pod \"dashboard-metrics-scraper-6ffb444bf9-lhct8\" (UID: \"6d9ff063-ba22-4965-aa68-699ee10b68f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8"
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198552     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d9ff063-ba22-4965-aa68-699ee10b68f9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lhct8\" (UID: \"6d9ff063-ba22-4965-aa68-699ee10b68f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8"
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198738     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmwb\" (UniqueName: \"kubernetes.io/projected/794a2f16-96be-4a3a-822c-5499be15dc22-kube-api-access-rjmwb\") pod \"kubernetes-dashboard-855c9754f9-4mmm8\" (UID: \"794a2f16-96be-4a3a-822c-5499be15dc22\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmm8"
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198883     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/794a2f16-96be-4a3a-822c-5499be15dc22-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4mmm8\" (UID: \"794a2f16-96be-4a3a-822c-5499be15dc22\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmm8"
	Nov 15 10:38:36 default-k8s-diff-port-303164 kubelet[779]: W1115 10:38:36.365943     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-18b8546920675846bcd65769a3b292beeaceb064e4e6319525af1889c3f6958e WatchSource:0}: Error finding container 18b8546920675846bcd65769a3b292beeaceb064e4e6319525af1889c3f6958e: Status 404 returned error can't find the container with id 18b8546920675846bcd65769a3b292beeaceb064e4e6319525af1889c3f6958e
	Nov 15 10:38:36 default-k8s-diff-port-303164 kubelet[779]: W1115 10:38:36.385411     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-436c86a6c86ce5e8cebea3030ec623a90322ba61a28497a4b12936aae806638f WatchSource:0}: Error finding container 436c86a6c86ce5e8cebea3030ec623a90322ba61a28497a4b12936aae806638f: Status 404 returned error can't find the container with id 436c86a6c86ce5e8cebea3030ec623a90322ba61a28497a4b12936aae806638f
	Nov 15 10:38:43 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:43.023243     779 scope.go:117] "RemoveContainer" containerID="cfcc8877bc9073350007f14fba989d5e0e8aa8164ff56e32288faf68ece14947"
	Nov 15 10:38:44 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:44.027228     779 scope.go:117] "RemoveContainer" containerID="cfcc8877bc9073350007f14fba989d5e0e8aa8164ff56e32288faf68ece14947"
	Nov 15 10:38:44 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:44.027521     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:44 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:44.027669     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:45 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:45.032465     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:45 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:45.032640     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:46 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:46.308549     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:46 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:46.308724     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:58 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:58.854820     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:59.072889     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:59.073113     779 scope.go:117] "RemoveContainer" containerID="cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:59.073304     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:59.106273     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmm8" podStartSLOduration=10.932090506 podStartE2EDuration="24.106254582s" podCreationTimestamp="2025-11-15 10:38:35 +0000 UTC" firstStartedPulling="2025-11-15 10:38:36.393488182 +0000 UTC m=+13.852619005" lastFinishedPulling="2025-11-15 10:38:49.567652258 +0000 UTC m=+27.026783081" observedRunningTime="2025-11-15 10:38:50.079104151 +0000 UTC m=+27.538234982" watchObservedRunningTime="2025-11-15 10:38:59.106254582 +0000 UTC m=+36.565385413"
	Nov 15 10:39:02 default-k8s-diff-port-303164 kubelet[779]: I1115 10:39:02.138090     779 scope.go:117] "RemoveContainer" containerID="5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed"
	Nov 15 10:39:06 default-k8s-diff-port-303164 kubelet[779]: I1115 10:39:06.308578     779 scope.go:117] "RemoveContainer" containerID="cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	Nov 15 10:39:06 default-k8s-diff-port-303164 kubelet[779]: E1115 10:39:06.309261     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:39:18 default-k8s-diff-port-303164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:39:18 default-k8s-diff-port-303164 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:39:18 default-k8s-diff-port-303164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2e643fed5aa284e7891d963b79c953d7c3d1f44044faa4dd0248eb955adca97f] <==
	2025/11/15 10:38:49 Using namespace: kubernetes-dashboard
	2025/11/15 10:38:49 Using in-cluster config to connect to apiserver
	2025/11/15 10:38:49 Using secret token for csrf signing
	2025/11/15 10:38:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:38:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:38:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:38:49 Generating JWE encryption key
	2025/11/15 10:38:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:38:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:38:50 Initializing JWE encryption key from synchronized object
	2025/11/15 10:38:50 Creating in-cluster Sidecar client
	2025/11/15 10:38:50 Serving insecurely on HTTP port: 9090
	2025/11/15 10:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:38:49 Starting overwatch
	
	
	==> storage-provisioner [5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed] <==
	I1115 10:38:31.847853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:39:01.850166       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce] <==
	I1115 10:39:02.210575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:39:02.223182       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:39:02.223242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:39:02.225442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:05.681567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:09.942144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:13.541245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:16.595115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:19.617448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:19.622328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:39:19.622463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:39:19.622629       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303164_79107faa-ad3f-4883-8971-7cbfdae8f2f2!
	I1115 10:39:19.623501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6d7f6c5-5cd5-4e38-9b83-ceab25b500ef", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-303164_79107faa-ad3f-4883-8971-7cbfdae8f2f2 became leader
	W1115 10:39:19.629698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:19.635921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:39:19.722787       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303164_79107faa-ad3f-4883-8971-7cbfdae8f2f2!
	W1115 10:39:21.639655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:21.644464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164: exit status 2 (376.549034ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-303164
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-303164:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec",
	        "Created": "2025-11-15T10:36:29.397887261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 725907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:38:14.697318691Z",
	            "FinishedAt": "2025-11-15T10:38:13.64804261Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/hosts",
	        "LogPath": "/var/lib/docker/containers/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec-json.log",
	        "Name": "/default-k8s-diff-port-303164",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-303164:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-303164",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec",
	                "LowerDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce-init/diff:/var/lib/docker/overlay2/0bbfd59c28fd370ab20ecad47a379b33dd5ec396c9aaa726a6f6c98d7b356a32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d22248264104672f690d9cb64da29d0b8bfbdd6b6442e55c2d31132564aefcce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-303164",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-303164/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-303164",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-303164",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-303164",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fbf5a14ded46f8708b285538f00084af13ce4c5533afa43904c02e4c38a75618",
	            "SandboxKey": "/var/run/docker/netns/fbf5a14ded46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-303164": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d1:16:37:0d:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04f2761baa0d9af0d0867b1125f2a84414f21796e96d64d92b5c112e2b1380e0",
	                    "EndpointID": "6947c36c88fd14f2bc10f156861231b0edc6748e197c3681401c839dba6851ab",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-303164",
	                        "41c6c089346a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164: exit status 2 (381.853386ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-303164 logs -n 25: (1.288816963s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-907610 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │                     │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p no-preload-907610                                                                                                                                                                                                                          │ no-preload-907610            │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ delete  │ -p disable-driver-mounts-167523                                                                                                                                                                                                               │ disable-driver-mounts-167523 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:36 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:36 UTC │ 15 Nov 25 10:37 UTC │
	│ image   │ embed-certs-531596 image list --format=json                                                                                                                                                                                                   │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ pause   │ -p embed-certs-531596 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ delete  │ -p embed-certs-531596                                                                                                                                                                                                                         │ embed-certs-531596           │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-395885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p newest-cni-395885 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ addons  │ enable dashboard -p newest-cni-395885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:37 UTC │
	│ start   │ -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │ 15 Nov 25 10:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-303164 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:37 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-303164 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ image   │ newest-cni-395885 image list --format=json                                                                                                                                                                                                    │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ pause   │ -p newest-cni-395885 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-303164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ start   │ -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:39 UTC │
	│ delete  │ -p newest-cni-395885                                                                                                                                                                                                                          │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ delete  │ -p newest-cni-395885                                                                                                                                                                                                                          │ newest-cni-395885            │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │ 15 Nov 25 10:38 UTC │
	│ start   │ -p auto-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-864099                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:38 UTC │                     │
	│ image   │ default-k8s-diff-port-303164 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:39 UTC │ 15 Nov 25 10:39 UTC │
	│ pause   │ -p default-k8s-diff-port-303164 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-303164 │ jenkins │ v1.37.0 │ 15 Nov 25 10:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:38:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:38:18.845427  726986 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:38:18.845580  726986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:18.845592  726986 out.go:374] Setting ErrFile to fd 2...
	I1115 10:38:18.845645  726986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:38:18.846079  726986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:38:18.847025  726986 out.go:368] Setting JSON to false
	I1115 10:38:18.847990  726986 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19250,"bootTime":1763183849,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:38:18.848086  726986 start.go:143] virtualization:  
	I1115 10:38:18.852023  726986 out.go:179] * [auto-864099] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:38:18.856380  726986 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:38:18.856506  726986 notify.go:221] Checking for updates...
	I1115 10:38:18.862849  726986 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:38:18.865994  726986 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:18.869213  726986 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:38:18.872356  726986 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:38:18.875433  726986 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:38:18.879025  726986 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:18.879194  726986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:38:18.930121  726986 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:38:18.930260  726986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:38:19.030920  726986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 10:38:19.021085175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:38:19.031019  726986 docker.go:319] overlay module found
	I1115 10:38:19.034321  726986 out.go:179] * Using the docker driver based on user configuration
	I1115 10:38:19.037347  726986 start.go:309] selected driver: docker
	I1115 10:38:19.037370  726986 start.go:930] validating driver "docker" against <nil>
	I1115 10:38:19.037384  726986 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:38:19.038133  726986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:38:19.125560  726986 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 10:38:19.115382642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:38:19.125745  726986 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:38:19.125994  726986 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:38:19.128976  726986 out.go:179] * Using Docker driver with root privileges
	I1115 10:38:19.131835  726986 cni.go:84] Creating CNI manager for ""
	I1115 10:38:19.131898  726986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:19.131918  726986 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:38:19.132009  726986 start.go:353] cluster config:
	{Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:38:19.135175  726986 out.go:179] * Starting "auto-864099" primary control-plane node in "auto-864099" cluster
	I1115 10:38:19.138058  726986 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:38:19.141042  726986 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:38:19.143887  726986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:19.143935  726986 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1115 10:38:19.143950  726986 cache.go:65] Caching tarball of preloaded images
	I1115 10:38:19.143960  726986 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:38:19.144037  726986 preload.go:238] Found /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1115 10:38:19.144047  726986 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:38:19.144153  726986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/config.json ...
	I1115 10:38:19.144170  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/config.json: {Name:mk7fb890c383a78db32389d094b5012c030c4f5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:19.169856  726986 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:38:19.169874  726986 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:38:19.169886  726986 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:38:19.169908  726986 start.go:360] acquireMachinesLock for auto-864099: {Name:mk2d9e06aa8943c9d5c5df210e24fc9695013696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:38:19.170007  726986 start.go:364] duration metric: took 84.183µs to acquireMachinesLock for "auto-864099"
	I1115 10:38:19.170031  726986 start.go:93] Provisioning new machine with config: &{Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:38:19.170098  726986 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:38:14.657509  725691 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-303164" ...
	I1115 10:38:14.657697  725691 cli_runner.go:164] Run: docker start default-k8s-diff-port-303164
	I1115 10:38:14.969509  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:15.000397  725691 kic.go:430] container "default-k8s-diff-port-303164" state is running.
	I1115 10:38:15.000882  725691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:38:15.031629  725691 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/config.json ...
	I1115 10:38:15.031896  725691 machine.go:94] provisionDockerMachine start ...
	I1115 10:38:15.031962  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:15.056129  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:15.056466  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:15.056477  725691 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:38:15.057301  725691 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:38:18.226449  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:38:18.226527  725691 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-303164"
	I1115 10:38:18.226647  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.253339  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:18.253683  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:18.253702  725691 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-303164 && echo "default-k8s-diff-port-303164" | sudo tee /etc/hostname
	I1115 10:38:18.428713  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-303164
	
	I1115 10:38:18.428790  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.454959  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:18.455253  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:18.455270  725691 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-303164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-303164/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-303164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:38:18.621446  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:38:18.621475  725691 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:38:18.621501  725691 ubuntu.go:190] setting up certificates
	I1115 10:38:18.621509  725691 provision.go:84] configureAuth start
	I1115 10:38:18.621568  725691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:38:18.644961  725691 provision.go:143] copyHostCerts
	I1115 10:38:18.645012  725691 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:38:18.645024  725691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:38:18.645104  725691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:38:18.645195  725691 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:38:18.645201  725691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:38:18.645227  725691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:38:18.645289  725691 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:38:18.645295  725691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:38:18.645324  725691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:38:18.645433  725691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-303164 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-303164 localhost minikube]
	I1115 10:38:18.745912  725691 provision.go:177] copyRemoteCerts
	I1115 10:38:18.746178  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:38:18.746249  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.781112  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:18.894389  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:38:18.916058  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1115 10:38:18.948978  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:38:18.970273  725691 provision.go:87] duration metric: took 348.739486ms to configureAuth
	I1115 10:38:18.970301  725691 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:38:18.970484  725691 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:18.970599  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:18.995026  725691 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:18.995351  725691 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33824 <nil> <nil>}
	I1115 10:38:18.995376  725691 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:38:19.403809  725691 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:38:19.403834  725691 machine.go:97] duration metric: took 4.371927008s to provisionDockerMachine
	I1115 10:38:19.403845  725691 start.go:293] postStartSetup for "default-k8s-diff-port-303164" (driver="docker")
	I1115 10:38:19.403856  725691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:38:19.403917  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:38:19.403973  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.428897  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.538034  725691 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:38:19.544732  725691 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:38:19.544765  725691 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:38:19.544776  725691 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:38:19.544852  725691 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:38:19.544984  725691 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:38:19.545148  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:38:19.553235  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:19.586638  725691 start.go:296] duration metric: took 182.77711ms for postStartSetup
	I1115 10:38:19.586773  725691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:38:19.586842  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.606409  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.727919  725691 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:38:19.734556  725691 fix.go:56] duration metric: took 5.104712086s for fixHost
	I1115 10:38:19.734583  725691 start.go:83] releasing machines lock for "default-k8s-diff-port-303164", held for 5.104763113s
	I1115 10:38:19.734685  725691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-303164
	I1115 10:38:19.752152  725691 ssh_runner.go:195] Run: cat /version.json
	I1115 10:38:19.752184  725691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:38:19.752213  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.752246  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:19.772136  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.794296  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:19.991746  725691 ssh_runner.go:195] Run: systemctl --version
	I1115 10:38:19.998826  725691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:38:20.065816  725691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:38:20.075085  725691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:38:20.075167  725691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:38:20.091094  725691 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:38:20.091119  725691 start.go:496] detecting cgroup driver to use...
	I1115 10:38:20.091243  725691 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:38:20.091347  725691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:38:20.120227  725691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:38:20.148059  725691 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:38:20.148139  725691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:38:20.171307  725691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:38:20.201708  725691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:38:20.387437  725691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:38:20.541932  725691 docker.go:234] disabling docker service ...
	I1115 10:38:20.542013  725691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:38:20.559404  725691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:38:20.573458  725691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:38:20.731284  725691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:38:20.888792  725691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:38:20.904325  725691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:38:20.921752  725691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:38:20.921885  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.931333  725691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:38:20.931447  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.940667  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.949473  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.958801  725691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:38:20.967022  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.976296  725691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.985324  725691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:20.994603  725691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:38:21.003989  725691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:38:21.013407  725691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:21.166893  725691 ssh_runner.go:195] Run: sudo systemctl restart crio
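(Aside, not part of the log.) The block above is minikube rewriting /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl, then a daemon-reload and a crio restart. A minimal Go sketch of replaying the core of that sequence locally through /bin/sh is below; minikube issues these commands over SSH via ssh_runner, the sed expressions and the config path are copied from the log, and the run helper plus the reduced step list are illustrative assumptions only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes one shell command as root, standing in for a single ssh_runner step.
func run(cmd string) {
	if out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput(); err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// A subset of the edits the log shows, in the same order.
		fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' %s`, conf),
		fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, s := range steps {
		run(s)
	}
}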
	I1115 10:38:21.975651  725691 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:38:21.975762  725691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:38:21.980007  725691 start.go:564] Will wait 60s for crictl version
	I1115 10:38:21.980101  725691 ssh_runner.go:195] Run: which crictl
	I1115 10:38:21.983856  725691 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:38:22.020954  725691 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:38:22.021073  725691 ssh_runner.go:195] Run: crio --version
	I1115 10:38:22.063664  725691 ssh_runner.go:195] Run: crio --version
	I1115 10:38:22.105255  725691 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:38:22.108378  725691 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-303164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:22.131729  725691 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1115 10:38:22.135726  725691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
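(Aside, not part of the log.) The host.minikube.internal entry is maintained with a replace-not-append pattern: any existing line for the name is filtered out of /etc/hosts, the fresh gateway mapping is appended, and the result is copied back in one cp. A sketch of the same idea expressed with Go string handling rather than the bash one-liner above; the upsertHost helper and the sample file contents are illustrative, only the ip/host pair comes from the log.

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line ending in "\t<host>" and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline from the log.
func upsertHost(hostsFile, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hostsFile, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n")
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal"
	fmt.Println(upsertHost(before, "192.168.85.1", "host.minikube.internal"))
}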
	I1115 10:38:22.147377  725691 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:38:22.147500  725691 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:22.147552  725691 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:22.182490  725691 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:22.182517  725691 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:38:22.182572  725691 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:22.220442  725691 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:22.220466  725691 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:38:22.220475  725691 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1115 10:38:22.220567  725691 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-303164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:38:22.220654  725691 ssh_runner.go:195] Run: crio config
	I1115 10:38:22.295775  725691 cni.go:84] Creating CNI manager for ""
	I1115 10:38:22.295825  725691 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:22.295842  725691 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:38:22.295876  725691 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-303164 NodeName:default-k8s-diff-port-303164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:38:22.296045  725691 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-303164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:38:22.296139  725691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:38:22.304393  725691 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:38:22.304477  725691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:38:22.311937  725691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1115 10:38:22.324440  725691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:38:22.337676  725691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
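(Aside, not part of the log.) The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (the "scp memory" step just above) and later diffed against the live /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration, as the restartPrimaryControlPlane step further down shows. A minimal sketch of that stage-and-compare pattern, under the assumption of local file access instead of minikube's ssh_runner; the placeholder YAML contents are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	current := "/var/tmp/minikube/kubeadm.yaml"
	staged := current + ".new"

	// Stage the freshly rendered config next to the one in use.
	rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ... full config as rendered above ...\n")
	if err := os.WriteFile(staged, rendered, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// diff exits 0 when the files match; a non-zero exit is treated here,
	// as a simplification, as "the running cluster needs reconfiguration".
	if err := exec.Command("diff", "-u", current, staged).Run(); err != nil {
		fmt.Println("kubeadm config changed, reconfiguration required")
		return
	}
	fmt.Println("kubeadm config unchanged")
}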
	I1115 10:38:22.351118  725691 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:38:22.355316  725691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:22.364921  725691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:22.514928  725691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:22.530967  725691 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164 for IP: 192.168.85.2
	I1115 10:38:22.530993  725691 certs.go:195] generating shared ca certs ...
	I1115 10:38:22.531010  725691 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:22.531140  725691 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:38:22.531189  725691 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:38:22.531202  725691 certs.go:257] generating profile certs ...
	I1115 10:38:22.531285  725691 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.key
	I1115 10:38:22.531385  725691 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key.44e49336
	I1115 10:38:22.531425  725691 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key
	I1115 10:38:22.531531  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:38:22.531569  725691 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:38:22.531582  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:38:22.531607  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:38:22.531632  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:38:22.531655  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:38:22.531705  725691 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:22.532716  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:38:22.570700  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:38:22.589472  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:38:22.617410  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:38:22.644795  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 10:38:22.698962  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:38:22.737489  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:38:22.794912  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:38:22.848801  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:38:22.881148  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:38:22.899661  725691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:38:22.918600  725691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:38:22.934237  725691 ssh_runner.go:195] Run: openssl version
	I1115 10:38:22.940930  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:38:22.949855  725691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:22.961442  725691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:22.961590  725691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:23.003650  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:38:23.013337  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:38:23.022479  725691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:38:23.026627  725691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:38:23.026739  725691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:38:23.071279  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:38:23.079808  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:38:23.088687  725691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:38:23.092824  725691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:38:23.092933  725691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:38:23.137999  725691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:38:23.146899  725691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:38:23.151204  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:38:23.195362  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:38:23.237357  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:38:23.317117  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:38:23.383581  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:38:23.505540  725691 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
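(Aside, not part of the log.) Two certificate checks run above: each CA copied into /usr/share/ca-certificates gets an OpenSSL subject-hash symlink under /etc/ssl/certs (the <hash>.0 names such as b5213941.0), and each control-plane certificate is verified with openssl x509 -checkend 86400, i.e. still valid for at least another day. A small sketch of both steps, shelling out to openssl the same way the log does; the example certificate path and helper names are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash returns what `openssl x509 -hash -noout -in cert` prints, which
// is the basename OpenSSL expects for the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(cert string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	return strings.TrimSpace(string(out)), err
}

// validForADay mirrors `openssl x509 -checkend 86400`: exit status 0 means the
// certificate will still be valid 86400 seconds from now.
func validForADay(cert string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run() == nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	hash, err := subjectHash(cert)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	fmt.Printf("would link %s -> %s\n", link, cert)
	fmt.Printf("valid for 24h: %v\n", validForADay(cert))
}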
	I1115 10:38:23.643970  725691 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-303164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-303164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:38:23.644083  725691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:38:23.644166  725691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:38:23.718376  725691 cri.go:89] found id: "2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f"
	I1115 10:38:23.718408  725691 cri.go:89] found id: "0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960"
	I1115 10:38:23.718416  725691 cri.go:89] found id: "a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22"
	I1115 10:38:23.718421  725691 cri.go:89] found id: "6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf"
	I1115 10:38:23.718429  725691 cri.go:89] found id: ""
	I1115 10:38:23.718493  725691 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:38:23.746955  725691 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:38:23Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:38:23.747054  725691 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:38:23.761854  725691 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:38:23.761876  725691 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:38:23.761939  725691 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:38:23.773988  725691 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:38:23.774430  725691 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-303164" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:23.774552  725691 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-514793/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-303164" cluster setting kubeconfig missing "default-k8s-diff-port-303164" context setting]
	I1115 10:38:23.774931  725691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:23.777326  725691 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:38:23.788529  725691 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1115 10:38:23.788602  725691 kubeadm.go:602] duration metric: took 26.719064ms to restartPrimaryControlPlane
	I1115 10:38:23.788637  725691 kubeadm.go:403] duration metric: took 144.685211ms to StartCluster
	I1115 10:38:23.788681  725691 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:23.788760  725691 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:38:23.789424  725691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:23.789761  725691 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:38:23.790123  725691 config.go:182] Loaded profile config "default-k8s-diff-port-303164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:23.790197  725691 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:38:23.790296  725691 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-303164"
	I1115 10:38:23.790323  725691 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-303164"
	W1115 10:38:23.790344  725691 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:38:23.790389  725691 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:38:23.790848  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.791051  725691 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-303164"
	I1115 10:38:23.791091  725691 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-303164"
	W1115 10:38:23.791115  725691 addons.go:248] addon dashboard should already be in state true
	I1115 10:38:23.791174  725691 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:38:23.791349  725691 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-303164"
	I1115 10:38:23.791363  725691 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-303164"
	I1115 10:38:23.791614  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.792070  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.805285  725691 out.go:179] * Verifying Kubernetes components...
	I1115 10:38:19.173514  726986 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:38:19.173785  726986 start.go:159] libmachine.API.Create for "auto-864099" (driver="docker")
	I1115 10:38:19.173825  726986 client.go:173] LocalClient.Create starting
	I1115 10:38:19.173878  726986 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem
	I1115 10:38:19.173908  726986 main.go:143] libmachine: Decoding PEM data...
	I1115 10:38:19.173921  726986 main.go:143] libmachine: Parsing certificate...
	I1115 10:38:19.173970  726986 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem
	I1115 10:38:19.173986  726986 main.go:143] libmachine: Decoding PEM data...
	I1115 10:38:19.173995  726986 main.go:143] libmachine: Parsing certificate...
	I1115 10:38:19.174363  726986 cli_runner.go:164] Run: docker network inspect auto-864099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:38:19.191724  726986 cli_runner.go:211] docker network inspect auto-864099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:38:19.191801  726986 network_create.go:284] running [docker network inspect auto-864099] to gather additional debugging logs...
	I1115 10:38:19.191817  726986 cli_runner.go:164] Run: docker network inspect auto-864099
	W1115 10:38:19.219353  726986 cli_runner.go:211] docker network inspect auto-864099 returned with exit code 1
	I1115 10:38:19.219379  726986 network_create.go:287] error running [docker network inspect auto-864099]: docker network inspect auto-864099: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-864099 not found
	I1115 10:38:19.219432  726986 network_create.go:289] output of [docker network inspect auto-864099]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-864099 not found
	
	** /stderr **
	I1115 10:38:19.219528  726986 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:19.242771  726986 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
	I1115 10:38:19.243116  726986 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a5248bd30780 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:00:a1:23:de:dd} reservation:<nil>}
	I1115 10:38:19.243450  726986 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-aae071823fd3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1a:b9:7d:07:12:bf} reservation:<nil>}
	I1115 10:38:19.243850  726986 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195d1d0}
	I1115 10:38:19.243867  726986 network_create.go:124] attempt to create docker network auto-864099 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:38:19.243927  726986 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-864099 auto-864099
	I1115 10:38:19.328585  726986 network_create.go:108] docker network auto-864099 192.168.76.0/24 created
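(Aside, not part of the log.) Subnet selection above is a linear scan: candidate private /24s (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) are skipped while an existing bridge already occupies them, and the first free one (192.168.76.0/24 here) becomes the new docker network. A simplified sketch of that scan; the 9-wide step between candidates and the taken subnets are read off the log, while the firstFreeSubnet helper and the upper bound are illustrative assumptions.

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns
// the first candidate that no existing docker bridge already uses.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 { // same 9-step progression the log shows
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	// Subnets the log reports as already taken by other bridges.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println("picked", firstFreeSubnet(taken)) // 192.168.76.0/24, matching the log
	// minikube then runs roughly:
	//   docker network create --driver=bridge --subnet=192.168.76.0/24 \
	//     --gateway=192.168.76.1 -o com.docker.network.driver.mtu=1500 auto-864099
}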
	I1115 10:38:19.328613  726986 kic.go:121] calculated static IP "192.168.76.2" for the "auto-864099" container
	I1115 10:38:19.328697  726986 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:38:19.349874  726986 cli_runner.go:164] Run: docker volume create auto-864099 --label name.minikube.sigs.k8s.io=auto-864099 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:38:19.370347  726986 oci.go:103] Successfully created a docker volume auto-864099
	I1115 10:38:19.370430  726986 cli_runner.go:164] Run: docker run --rm --name auto-864099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-864099 --entrypoint /usr/bin/test -v auto-864099:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:38:20.024081  726986 oci.go:107] Successfully prepared a docker volume auto-864099
	I1115 10:38:20.024169  726986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:20.024180  726986 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:38:20.024254  726986 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-864099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
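(Aside, not part of the log.) The preloaded images are not pulled into the new node one by one; the lz4 tarball from the host cache is untarred straight into the node's /var volume by a throwaway container whose entrypoint is /usr/bin/tar. A sketch of assembling that command with os/exec, using the same mounts and flags as the log line above; the image reference is shortened (digest omitted) for readability.

package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	volume := "auto-864099"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837" // digest omitted here

	// One-shot sidecar: mount the tarball read-only, mount the node's data
	// volume at /extractDir, and let tar decompress (-I lz4) into it.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}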
	I1115 10:38:23.813408  725691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:23.884672  725691 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-303164"
	W1115 10:38:23.884694  725691 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:38:23.884718  725691 host.go:66] Checking if "default-k8s-diff-port-303164" exists ...
	I1115 10:38:23.885124  725691 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-303164 --format={{.State.Status}}
	I1115 10:38:23.889659  725691 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:38:23.897582  725691 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:38:23.897828  725691 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:38:23.907738  725691 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:38:23.907763  725691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:38:23.907831  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:23.907991  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:38:23.907999  725691 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:38:23.908032  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:23.953895  725691 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:38:23.953916  725691 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:38:23.953980  725691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-303164
	I1115 10:38:23.997248  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:24.011092  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:24.023555  725691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/default-k8s-diff-port-303164/id_rsa Username:docker}
	I1115 10:38:24.225029  725691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:24.273379  725691 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-303164" to be "Ready" ...
	I1115 10:38:24.307994  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:38:24.308021  725691 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:38:24.223975  726986 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-864099:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.19968203s)
	I1115 10:38:24.224009  726986 kic.go:203] duration metric: took 4.199825189s to extract preloaded images to volume ...
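The two docker runs above are the kic preload path: the lz4-compressed image tarball is untarred straight into the named volume that later becomes the node container's /var. A minimal sketch of that extraction step, assuming a local preload tarball and an already-created volume (paths and names below are illustrative, not minikube source):

    // Minimal sketch (not minikube source): the preload-extraction step logged above,
    // shelling out to docker to untar an lz4 tarball into a named volume.
    // The tarball path, volume name, and image tag are illustrative assumptions.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	tarball := "/tmp/preloaded-images-k8s.tar.lz4" // hypothetical local path
    	volume := "auto-864099"                        // volume prepared earlier in the log
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837"

    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("preload extract failed: %v\n%s", err, out)
    	}
    }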
	W1115 10:38:24.224140  726986 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1115 10:38:24.224289  726986 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:38:24.330797  726986 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-864099 --name auto-864099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-864099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-864099 --network auto-864099 --ip 192.168.76.2 --volume auto-864099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:38:24.795146  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Running}}
	I1115 10:38:24.823175  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:38:24.849806  726986 cli_runner.go:164] Run: docker exec auto-864099 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:38:24.915409  726986 oci.go:144] the created container "auto-864099" has a running status.
	I1115 10:38:24.915435  726986 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa...
	I1115 10:38:25.286937  726986 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:38:25.316590  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:38:25.381838  726986 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:38:25.381859  726986 kic_runner.go:114] Args: [docker exec --privileged auto-864099 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:38:25.515222  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:38:25.547796  726986 machine.go:94] provisionDockerMachine start ...
	I1115 10:38:25.547901  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:25.591388  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:25.591750  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:25.591761  726986 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:38:25.592548  726986 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1115 10:38:28.765392  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-864099
	
	I1115 10:38:28.765458  726986 ubuntu.go:182] provisioning hostname "auto-864099"
	I1115 10:38:28.765563  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:28.791200  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:28.791513  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:28.791525  726986 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-864099 && echo "auto-864099" | sudo tee /etc/hostname
	I1115 10:38:24.354693  725691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:38:24.365437  725691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:38:24.399243  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:38:24.399271  725691 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:38:24.562539  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:38:24.562568  725691 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:38:24.641852  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:38:24.641880  725691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1115 10:38:24.708054  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:38:24.708081  725691 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:38:24.737867  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:38:24.737890  725691 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:38:24.790359  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:38:24.790386  725691 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:38:24.828437  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:38:24.828467  725691 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:38:24.856533  725691 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:38:24.856560  725691 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:38:24.899278  725691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:38:30.178435  725691 node_ready.go:49] node "default-k8s-diff-port-303164" is "Ready"
	I1115 10:38:30.178472  725691 node_ready.go:38] duration metric: took 5.905033549s for node "default-k8s-diff-port-303164" to be "Ready" ...
	I1115 10:38:30.178489  725691 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:38:30.178550  725691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:38:30.491978  725691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.137249381s)
	I1115 10:38:32.424693  725691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.059217891s)
	I1115 10:38:32.424813  725691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.525500603s)
	I1115 10:38:32.424987  725691 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.246419974s)
	I1115 10:38:32.425010  725691 api_server.go:72] duration metric: took 8.635194321s to wait for apiserver process to appear ...
	I1115 10:38:32.425017  725691 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:38:32.425034  725691 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1115 10:38:32.428254  725691 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-303164 addons enable metrics-server
	
	I1115 10:38:32.431176  725691 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1115 10:38:28.998151  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-864099
	
	I1115 10:38:28.998245  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:29.025808  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:29.026126  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:29.026148  726986 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-864099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-864099/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-864099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:38:29.217615  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
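The SSH command above is idempotent: it only touches /etc/hosts when no line already maps to the new hostname, rewriting the 127.0.1.1 entry if one exists and appending one otherwise. A minimal Go sketch of the same guard, run against a local copy of the file rather than over SSH (the file name and hostname are assumptions for illustration):

    // Minimal sketch (not minikube source): the idempotent /etc/hosts update shown
    // above, applied to a local file instead of over SSH.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func ensureHostname(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	text := string(data)
    	// Already mapped? Mirrors: grep -xq '.*\s<name>' /etc/hosts
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
    		return nil
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(text) {
    		// Rewrite the existing 127.0.1.1 entry in place.
    		text = loopback.ReplaceAllString(text, "127.0.1.1 "+name)
    	} else {
    		// Otherwise append a new entry.
    		if !strings.HasSuffix(text, "\n") {
    			text += "\n"
    		}
    		text += "127.0.1.1 " + name + "\n"
    	}
    	return os.WriteFile(path, []byte(text), 0o644)
    }

    func main() {
    	if err := ensureHostname("hosts.sample", "auto-864099"); err != nil {
    		fmt.Println("update failed:", err)
    	}
    }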
	I1115 10:38:29.217700  726986 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-514793/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-514793/.minikube}
	I1115 10:38:29.217753  726986 ubuntu.go:190] setting up certificates
	I1115 10:38:29.217782  726986 provision.go:84] configureAuth start
	I1115 10:38:29.217867  726986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-864099
	I1115 10:38:29.246780  726986 provision.go:143] copyHostCerts
	I1115 10:38:29.246843  726986 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem, removing ...
	I1115 10:38:29.246854  726986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem
	I1115 10:38:29.246931  726986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/ca.pem (1082 bytes)
	I1115 10:38:29.247013  726986 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem, removing ...
	I1115 10:38:29.247018  726986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem
	I1115 10:38:29.247042  726986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/cert.pem (1123 bytes)
	I1115 10:38:29.247094  726986 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem, removing ...
	I1115 10:38:29.247099  726986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem
	I1115 10:38:29.247121  726986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-514793/.minikube/key.pem (1675 bytes)
	I1115 10:38:29.247167  726986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem org=jenkins.auto-864099 san=[127.0.0.1 192.168.76.2 auto-864099 localhost minikube]
	I1115 10:38:29.873151  726986 provision.go:177] copyRemoteCerts
	I1115 10:38:29.873271  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:38:29.873345  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:29.891168  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.007521  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1115 10:38:30.030561  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1115 10:38:30.057067  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:38:30.084412  726986 provision.go:87] duration metric: took 866.588413ms to configureAuth
	I1115 10:38:30.084488  726986 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:38:30.084715  726986 config.go:182] Loaded profile config "auto-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:38:30.084880  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.117535  726986 main.go:143] libmachine: Using SSH client type: native
	I1115 10:38:30.117876  726986 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1115 10:38:30.117894  726986 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:38:30.448801  726986 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:38:30.448908  726986 machine.go:97] duration metric: took 4.901089868s to provisionDockerMachine
	I1115 10:38:30.448948  726986 client.go:176] duration metric: took 11.275115145s to LocalClient.Create
	I1115 10:38:30.449020  726986 start.go:167] duration metric: took 11.275231843s to libmachine.API.Create "auto-864099"
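The last provisioning step before these duration metrics writes a one-line drop-in, /etc/sysconfig/crio.minikube, marking the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts CRI-O. A minimal sketch that renders the same drop-in content to a local file for inspection (the output path is an assumption; in the log it is piped through sudo tee on the node):

    // Minimal sketch (not minikube source): rendering the CRI-O drop-in written above.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	serviceCIDR := "10.96.0.0/12" // value seen in the log
    	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    	// Written locally here; on the node the target is /etc/sysconfig/crio.minikube.
    	if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil {
    		panic(err)
    	}
    }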
	I1115 10:38:30.449056  726986 start.go:293] postStartSetup for "auto-864099" (driver="docker")
	I1115 10:38:30.449079  726986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:38:30.449190  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:38:30.449254  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.472725  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.598713  726986 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:38:30.602911  726986 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:38:30.602954  726986 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:38:30.602966  726986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/addons for local assets ...
	I1115 10:38:30.603030  726986 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-514793/.minikube/files for local assets ...
	I1115 10:38:30.603129  726986 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem -> 5166372.pem in /etc/ssl/certs
	I1115 10:38:30.603261  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:38:30.612792  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:30.637797  726986 start.go:296] duration metric: took 188.713607ms for postStartSetup
	I1115 10:38:30.638238  726986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-864099
	I1115 10:38:30.656826  726986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/config.json ...
	I1115 10:38:30.657157  726986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:38:30.657224  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.675461  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.783877  726986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:38:30.792200  726986 start.go:128] duration metric: took 11.62208685s to createHost
	I1115 10:38:30.792225  726986 start.go:83] releasing machines lock for "auto-864099", held for 11.622208201s
	I1115 10:38:30.792312  726986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-864099
	I1115 10:38:30.820536  726986 ssh_runner.go:195] Run: cat /version.json
	I1115 10:38:30.820589  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.821008  726986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:38:30.821070  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:38:30.869024  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.870637  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:38:30.985444  726986 ssh_runner.go:195] Run: systemctl --version
	I1115 10:38:31.127466  726986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:38:31.193216  726986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:38:31.198079  726986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:38:31.198159  726986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:38:31.250283  726986 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
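Before picking a CNI, minikube sidelines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, as the find/mv run above shows. A minimal Go sketch of the same renaming against a local directory (the directory name is an assumption; on the node the path is /etc/cni/net.d):

    // Minimal sketch (not minikube source): disabling bridge/podman CNI configs by
    // renaming them, mirroring the find/-exec mv step in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "net.d" // stand-in for /etc/cni/net.d
    	for _, pat := range []string{"*bridge*", "*podman*"} {
    		matches, _ := filepath.Glob(filepath.Join(dir, pat))
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Println(err)
    				continue
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }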
	I1115 10:38:31.250359  726986 start.go:496] detecting cgroup driver to use...
	I1115 10:38:31.250413  726986 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1115 10:38:31.250489  726986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:38:31.286390  726986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:38:31.305822  726986 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:38:31.305943  726986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:38:31.331452  726986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:38:31.361465  726986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:38:31.592119  726986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:38:31.785123  726986 docker.go:234] disabling docker service ...
	I1115 10:38:31.785208  726986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:38:31.821227  726986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:38:31.844725  726986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:38:32.028187  726986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:38:32.246072  726986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:38:32.261709  726986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:38:32.290959  726986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:38:32.291071  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.305237  726986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1115 10:38:32.305308  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.325323  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.336329  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.345222  726986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:38:32.357890  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.370205  726986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.388140  726986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:38:32.396830  726986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:38:32.406428  726986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:38:32.422108  726986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:32.608099  726986 ssh_runner.go:195] Run: sudo systemctl restart crio
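The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls before crio is restarted. A minimal sketch of the first of those rewrites, using Go's regexp instead of sed against a local copy of the file (the local path is an assumption):

    // Minimal sketch (not minikube source): the pause_image rewrite shown above,
    // applied to a local copy of 02-crio.conf.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "02-crio.conf" // local copy; the log edits /etc/crio/crio.conf.d/02-crio.conf
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }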
	I1115 10:38:32.768989  726986 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:38:32.769083  726986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:38:32.772895  726986 start.go:564] Will wait 60s for crictl version
	I1115 10:38:32.772966  726986 ssh_runner.go:195] Run: which crictl
	I1115 10:38:32.776744  726986 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:38:32.809255  726986 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:38:32.809360  726986 ssh_runner.go:195] Run: crio --version
	I1115 10:38:32.848450  726986 ssh_runner.go:195] Run: crio --version
	I1115 10:38:32.890904  726986 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:38:32.893935  726986 cli_runner.go:164] Run: docker network inspect auto-864099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:38:32.911081  726986 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:38:32.915190  726986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:32.931172  726986 kubeadm.go:884] updating cluster {Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:38:32.931297  726986 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:38:32.931350  726986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:32.970343  726986 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:32.970363  726986 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:38:32.970416  726986 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:38:33.003846  726986 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:38:33.003941  726986 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:38:33.003964  726986 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:38:33.004128  726986 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-864099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:38:33.004288  726986 ssh_runner.go:195] Run: crio config
	I1115 10:38:33.059603  726986 cni.go:84] Creating CNI manager for ""
	I1115 10:38:33.059626  726986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:33.059639  726986 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:38:33.059679  726986 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-864099 NodeName:auto-864099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:38:33.059861  726986 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-864099"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:38:33.059940  726986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:38:33.067554  726986 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:38:33.067622  726986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:38:33.074895  726986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1115 10:38:33.087208  726986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:38:33.100705  726986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
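The kubeadm config rendered a few lines up is a four-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is staged here as /var/tmp/minikube/kubeadm.yaml.new and copied into place just before init. A minimal stdlib-only sketch that splits such a multi-document file and lists each document's kind (the local filename is an assumption):

    // Minimal sketch (not minikube source): listing the `kind` of each document in a
    // multi-document kubeadm config like the one rendered above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("kubeadm.yaml") // local copy of the staged config
    	if err != nil {
    		panic(err)
    	}
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				fmt.Println(strings.TrimPrefix(line, "kind: "))
    			}
    		}
    	}
    }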
	I1115 10:38:33.113165  726986 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:38:33.116837  726986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:38:33.126287  726986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:38:33.282287  726986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:38:33.300158  726986 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099 for IP: 192.168.76.2
	I1115 10:38:33.300176  726986 certs.go:195] generating shared ca certs ...
	I1115 10:38:33.300192  726986 certs.go:227] acquiring lock for ca certs: {Name:mk6f3994573a0b35238f645d1c65b992afed6f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:33.300346  726986 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key
	I1115 10:38:33.300388  726986 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key
	I1115 10:38:33.300403  726986 certs.go:257] generating profile certs ...
	I1115 10:38:33.300485  726986 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.key
	I1115 10:38:33.300496  726986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt with IP's: []
	I1115 10:38:32.433952  725691 addons.go:515] duration metric: took 8.643744284s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1115 10:38:32.438688  725691 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1115 10:38:32.440427  725691 api_server.go:141] control plane version: v1.34.1
	I1115 10:38:32.440456  725691 api_server.go:131] duration metric: took 15.431938ms to wait for apiserver health ...
	I1115 10:38:32.440466  725691 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:38:32.448511  725691 system_pods.go:59] 8 kube-system pods found
	I1115 10:38:32.448550  725691 system_pods.go:61] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:38:32.448561  725691 system_pods.go:61] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:38:32.448568  725691 system_pods.go:61] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:38:32.448575  725691 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:38:32.448589  725691 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:38:32.448598  725691 system_pods.go:61] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:38:32.448606  725691 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:38:32.448610  725691 system_pods.go:61] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Running
	I1115 10:38:32.448617  725691 system_pods.go:74] duration metric: took 8.141458ms to wait for pod list to return data ...
	I1115 10:38:32.448628  725691 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:38:32.453243  725691 default_sa.go:45] found service account: "default"
	I1115 10:38:32.453273  725691 default_sa.go:55] duration metric: took 4.637581ms for default service account to be created ...
	I1115 10:38:32.453283  725691 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:38:32.549028  725691 system_pods.go:86] 8 kube-system pods found
	I1115 10:38:32.549115  725691 system_pods.go:89] "coredns-66bc5c9577-97gv6" [b6f9a65e-75c6-4783-a879-1dfc86407862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:38:32.549143  725691 system_pods.go:89] "etcd-default-k8s-diff-port-303164" [4eb09433-dbaa-4753-aad2-8452321e45a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:38:32.549184  725691 system_pods.go:89] "kindnet-rph85" [30ef2b98-29f3-4a7e-a041-5a6bd98c92ef] Running
	I1115 10:38:32.549216  725691 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-303164" [04835349-0a82-4a74-9ed1-9032f3bfabef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:38:32.549243  725691 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-303164" [cfdb7882-766a-463b-a480-f6ee60cb718f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:38:32.549276  725691 system_pods.go:89] "kube-proxy-vmnnc" [e61077d0-3c58-4094-ad7e-436ec2f7fb3f] Running
	I1115 10:38:32.549302  725691 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-303164" [8c9a46a5-0f1d-496c-8b18-40544a608356] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:38:32.549326  725691 system_pods.go:89] "storage-provisioner" [344be432-6b85-4dea-a1a0-54ce0079d253] Running
	I1115 10:38:32.549366  725691 system_pods.go:126] duration metric: took 96.07623ms to wait for k8s-apps to be running ...
	I1115 10:38:32.549393  725691 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:38:32.549480  725691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:38:32.590116  725691 system_svc.go:56] duration metric: took 40.712962ms WaitForService to wait for kubelet
	I1115 10:38:32.590194  725691 kubeadm.go:587] duration metric: took 8.800375917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:38:32.590227  725691 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:38:32.594738  725691 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1115 10:38:32.594819  725691 node_conditions.go:123] node cpu capacity is 2
	I1115 10:38:32.594845  725691 node_conditions.go:105] duration metric: took 4.598952ms to run NodePressure ...
	I1115 10:38:32.594871  725691 start.go:242] waiting for startup goroutines ...
	I1115 10:38:32.594910  725691 start.go:247] waiting for cluster config update ...
	I1115 10:38:32.594937  725691 start.go:256] writing updated cluster config ...
	I1115 10:38:32.595296  725691 ssh_runner.go:195] Run: rm -f paused
	I1115 10:38:32.599419  725691 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:38:32.648045  725691 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-97gv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:38:34.410122  726986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt ...
	I1115 10:38:34.410154  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: {Name:mk72ef7aa2e4f5c07d0deafadc796b25165e3def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:34.410395  726986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.key ...
	I1115 10:38:34.410411  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.key: {Name:mke3768194deaa0353f27df285299cb4cf39a568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:34.410513  726986 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4
	I1115 10:38:34.410532  726986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:38:35.251490  726986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4 ...
	I1115 10:38:35.251521  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4: {Name:mke056e375cfbf899d519da388cae11f8b474a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.251735  726986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4 ...
	I1115 10:38:35.251751  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4: {Name:mk0a7818a031e606695d20b1279ae9829b9f1433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.251850  726986 certs.go:382] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt.12308ab4 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt
	I1115 10:38:35.251932  726986 certs.go:386] copying /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key.12308ab4 -> /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key
	I1115 10:38:35.251992  726986 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key
	I1115 10:38:35.252005  726986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt with IP's: []
	I1115 10:38:35.907210  726986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt ...
	I1115 10:38:35.907241  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt: {Name:mkafd0da0cbfee412fbce111b94c9d89ae9707e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.907431  726986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key ...
	I1115 10:38:35.907444  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key: {Name:mk1c58da140cd795e8b2d8ac2422e39b840bc96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:38:35.907622  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem (1338 bytes)
	W1115 10:38:35.907665  726986 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637_empty.pem, impossibly tiny 0 bytes
	I1115 10:38:35.907678  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca-key.pem (1679 bytes)
	I1115 10:38:35.907704  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:38:35.907734  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:38:35.907763  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/certs/key.pem (1675 bytes)
	I1115 10:38:35.907814  726986 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem (1708 bytes)
	I1115 10:38:35.908526  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:38:35.927276  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:38:35.946192  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:38:35.964089  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:38:35.981293  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1115 10:38:35.999750  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:38:36.024106  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:38:36.042803  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:38:36.062599  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/certs/516637.pem --> /usr/share/ca-certificates/516637.pem (1338 bytes)
	I1115 10:38:36.080800  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/ssl/certs/5166372.pem --> /usr/share/ca-certificates/5166372.pem (1708 bytes)
	I1115 10:38:36.098690  726986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:38:36.116609  726986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:38:36.129769  726986 ssh_runner.go:195] Run: openssl version
	I1115 10:38:36.135963  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516637.pem && ln -fs /usr/share/ca-certificates/516637.pem /etc/ssl/certs/516637.pem"
	I1115 10:38:36.144257  726986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516637.pem
	I1115 10:38:36.147813  726986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:39 /usr/share/ca-certificates/516637.pem
	I1115 10:38:36.147877  726986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516637.pem
	I1115 10:38:36.190385  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516637.pem /etc/ssl/certs/51391683.0"
	I1115 10:38:36.198915  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5166372.pem && ln -fs /usr/share/ca-certificates/5166372.pem /etc/ssl/certs/5166372.pem"
	I1115 10:38:36.207725  726986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5166372.pem
	I1115 10:38:36.211465  726986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:39 /usr/share/ca-certificates/5166372.pem
	I1115 10:38:36.211529  726986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5166372.pem
	I1115 10:38:36.252881  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5166372.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:38:36.261237  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:38:36.268993  726986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:36.272563  726986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:36.272622  726986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:38:36.313880  726986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
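Each CA bundle installed above is followed by openssl x509 -hash, and the resulting subject hash (for example b5213941 for minikubeCA.pem) becomes the <hash>.0 symlink name under /etc/ssl/certs that OpenSSL's certificate lookup expects. A minimal sketch of that hash-and-link step for a local PEM file (file name and link location are assumptions):

    // Minimal sketch (not minikube source): compute a certificate's subject hash via
    // openssl and create the <hash>.0 symlink, mirroring the steps in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "minikubeCA.pem" // hypothetical local copy of the CA cert
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := hash + ".0" // e.g. b5213941.0, as seen in the log
    	_ = os.Remove(link) // ignore error if the link does not exist yet
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pem)
    }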
	I1115 10:38:36.324093  726986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:38:36.339423  726986 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:38:36.339494  726986 kubeadm.go:401] StartCluster: {Name:auto-864099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-864099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:38:36.339577  726986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:38:36.339661  726986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:38:36.403085  726986 cri.go:89] found id: ""
	I1115 10:38:36.403187  726986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:38:36.415069  726986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:38:36.423222  726986 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:38:36.423296  726986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:38:36.435194  726986 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:38:36.435213  726986 kubeadm.go:158] found existing configuration files:
	
	I1115 10:38:36.435272  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:38:36.446823  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:38:36.446892  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:38:36.455564  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:38:36.464362  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:38:36.464440  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:38:36.472500  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:38:36.481344  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:38:36.481449  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:38:36.489130  726986 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:38:36.497131  726986 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:38:36.497199  726986 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:38:36.505382  726986 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:38:36.547697  726986 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:38:36.548052  726986 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:38:36.572808  726986 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:38:36.572887  726986 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1115 10:38:36.572929  726986 kubeadm.go:319] OS: Linux
	I1115 10:38:36.572981  726986 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:38:36.573037  726986 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1115 10:38:36.573089  726986 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:38:36.573144  726986 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:38:36.573199  726986 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:38:36.573254  726986 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:38:36.573306  726986 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:38:36.573359  726986 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:38:36.573412  726986 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1115 10:38:36.661994  726986 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:38:36.662112  726986 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:38:36.662214  726986 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:38:36.672937  726986 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:38:36.678465  726986 out.go:252]   - Generating certificates and keys ...
	I1115 10:38:36.678558  726986 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:38:36.678633  726986 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:38:37.018019  726986 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:38:37.105295  726986 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:38:37.725158  726986 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:38:38.729483  726986 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1115 10:38:34.660658  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:36.666934  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:39.200967  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:38.892411  726986 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:38:38.892624  726986 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-864099 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:38:39.669238  726986 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:38:39.669381  726986 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-864099 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:38:40.034451  726986 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:38:40.918211  726986 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:38:41.726276  726986 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:38:41.726405  726986 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:38:42.280158  726986 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:38:43.196635  726986 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:38:43.692552  726986 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	W1115 10:38:41.654614  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:44.155005  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:44.480869  726986 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:38:44.846901  726986 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:38:44.847080  726986 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:38:44.851980  726986 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:38:44.855450  726986 out.go:252]   - Booting up control plane ...
	I1115 10:38:44.855831  726986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:38:44.855978  726986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:38:44.859621  726986 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:38:44.901384  726986 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:38:44.901498  726986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:38:44.912302  726986 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:38:44.912939  726986 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:38:44.913018  726986 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:38:45.130060  726986 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:38:45.130283  726986 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:38:47.133965  726986 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000819514s
	I1115 10:38:47.134146  726986 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:38:47.134275  726986 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 10:38:47.134412  726986 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:38:47.134529  726986 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1115 10:38:46.162464  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:48.165890  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:51.978574  726986 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.844253712s
	I1115 10:38:53.678172  726986 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.544437788s
	W1115 10:38:50.170285  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:52.653105  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:54.636480  726986 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502541168s
	I1115 10:38:54.664732  726986 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:38:54.694533  726986 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:38:54.709174  726986 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:38:54.709406  726986 kubeadm.go:319] [mark-control-plane] Marking the node auto-864099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:38:54.726149  726986 kubeadm.go:319] [bootstrap-token] Using token: nl7m3s.d5tp1el1kgavz79t
	I1115 10:38:54.729037  726986 out.go:252]   - Configuring RBAC rules ...
	I1115 10:38:54.729160  726986 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:38:54.746104  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:38:54.759545  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:38:54.763853  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:38:54.771599  726986 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:38:54.777904  726986 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:38:55.043989  726986 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:38:55.516434  726986 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:38:56.043715  726986 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:38:56.045110  726986 kubeadm.go:319] 
	I1115 10:38:56.045227  726986 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:38:56.045241  726986 kubeadm.go:319] 
	I1115 10:38:56.045319  726986 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:38:56.045324  726986 kubeadm.go:319] 
	I1115 10:38:56.045349  726986 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:38:56.045408  726986 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:38:56.045470  726986 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:38:56.045476  726986 kubeadm.go:319] 
	I1115 10:38:56.045531  726986 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:38:56.045536  726986 kubeadm.go:319] 
	I1115 10:38:56.045584  726986 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:38:56.045618  726986 kubeadm.go:319] 
	I1115 10:38:56.045673  726986 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:38:56.045749  726986 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:38:56.045817  726986 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:38:56.045822  726986 kubeadm.go:319] 
	I1115 10:38:56.045907  726986 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:38:56.045984  726986 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:38:56.045989  726986 kubeadm.go:319] 
	I1115 10:38:56.046073  726986 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nl7m3s.d5tp1el1kgavz79t \
	I1115 10:38:56.046177  726986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 \
	I1115 10:38:56.046198  726986 kubeadm.go:319] 	--control-plane 
	I1115 10:38:56.046202  726986 kubeadm.go:319] 
	I1115 10:38:56.046288  726986 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:38:56.046293  726986 kubeadm.go:319] 
	I1115 10:38:56.046375  726986 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nl7m3s.d5tp1el1kgavz79t \
	I1115 10:38:56.046478  726986 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b08a480347ff283eb676e51d7a3b78a83e789b9e4ed3b8a299d9c069808ada34 
	I1115 10:38:56.050734  726986 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1115 10:38:56.050976  726986 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1115 10:38:56.051092  726986 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:38:56.051116  726986 cni.go:84] Creating CNI manager for ""
	I1115 10:38:56.051124  726986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:38:56.056112  726986 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:38:56.058944  726986 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:38:56.064567  726986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:38:56.064640  726986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:38:56.079735  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:38:56.417521  726986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:38:56.417679  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:56.417767  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-864099 minikube.k8s.io/updated_at=2025_11_15T10_38_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=auto-864099 minikube.k8s.io/primary=true
	I1115 10:38:56.434615  726986 ops.go:34] apiserver oom_adj: -16
	I1115 10:38:56.576916  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:57.077559  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:57.577839  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:58.077038  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:58.577414  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:38:54.653500  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:57.156170  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:38:59.156437  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:38:59.077585  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:38:59.577760  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:39:00.104615  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:39:00.577250  726986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:39:00.727780  726986 kubeadm.go:1114] duration metric: took 4.310134337s to wait for elevateKubeSystemPrivileges
	I1115 10:39:00.727809  726986 kubeadm.go:403] duration metric: took 24.388318783s to StartCluster
	I1115 10:39:00.727826  726986 settings.go:142] acquiring lock: {Name:mkb2db65b0d34eb8d179ff090fd6ad0ff8c5e49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:39:00.727892  726986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:39:00.728906  726986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/kubeconfig: {Name:mk1d2adae7284385e06148a96913c150b56b1317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:39:00.729149  726986 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:39:00.729252  726986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:39:00.729560  726986 config.go:182] Loaded profile config "auto-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:39:00.729643  726986 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:39:00.729725  726986 addons.go:70] Setting storage-provisioner=true in profile "auto-864099"
	I1115 10:39:00.729747  726986 addons.go:239] Setting addon storage-provisioner=true in "auto-864099"
	I1115 10:39:00.729778  726986 host.go:66] Checking if "auto-864099" exists ...
	I1115 10:39:00.730462  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:39:00.730710  726986 addons.go:70] Setting default-storageclass=true in profile "auto-864099"
	I1115 10:39:00.730735  726986 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-864099"
	I1115 10:39:00.730986  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:39:00.735384  726986 out.go:179] * Verifying Kubernetes components...
	I1115 10:39:00.738434  726986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:39:00.798355  726986 addons.go:239] Setting addon default-storageclass=true in "auto-864099"
	I1115 10:39:00.798402  726986 host.go:66] Checking if "auto-864099" exists ...
	I1115 10:39:00.798824  726986 cli_runner.go:164] Run: docker container inspect auto-864099 --format={{.State.Status}}
	I1115 10:39:00.799451  726986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:39:00.802665  726986 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:39:00.802685  726986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:39:00.802754  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:39:00.836445  726986 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:39:00.836473  726986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:39:00.836542  726986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-864099
	I1115 10:39:00.845858  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:39:00.865217  726986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/auto-864099/id_rsa Username:docker}
	I1115 10:39:01.222549  726986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:39:01.222562  726986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:39:01.223561  726986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:39:01.363757  726986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:39:01.791160  726986 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1115 10:39:01.793052  726986 node_ready.go:35] waiting up to 15m0s for node "auto-864099" to be "Ready" ...
	I1115 10:39:02.091798  726986 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:39:02.094843  726986 addons.go:515] duration metric: took 1.365174899s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1115 10:39:02.296895  726986 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-864099" context rescaled to 1 replicas
	W1115 10:39:03.796135  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:01.158465  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	W1115 10:39:03.653471  725691 pod_ready.go:104] pod "coredns-66bc5c9577-97gv6" is not "Ready", error: <nil>
	I1115 10:39:04.152932  725691 pod_ready.go:94] pod "coredns-66bc5c9577-97gv6" is "Ready"
	I1115 10:39:04.152962  725691 pod_ready.go:86] duration metric: took 31.504889197s for pod "coredns-66bc5c9577-97gv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.155653  725691 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.160189  725691 pod_ready.go:94] pod "etcd-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:04.160215  725691 pod_ready.go:86] duration metric: took 4.534265ms for pod "etcd-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.162379  725691 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.167078  725691 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:04.167109  725691 pod_ready.go:86] duration metric: took 4.701718ms for pod "kube-apiserver-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.169747  725691 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.351806  725691 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:04.351836  725691 pod_ready.go:86] duration metric: took 182.060358ms for pod "kube-controller-manager-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.551929  725691 pod_ready.go:83] waiting for pod "kube-proxy-vmnnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:04.951065  725691 pod_ready.go:94] pod "kube-proxy-vmnnc" is "Ready"
	I1115 10:39:04.951095  725691 pod_ready.go:86] duration metric: took 399.141119ms for pod "kube-proxy-vmnnc" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:05.151042  725691 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:05.551956  725691 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-303164" is "Ready"
	I1115 10:39:05.551984  725691 pod_ready.go:86] duration metric: took 400.909495ms for pod "kube-scheduler-default-k8s-diff-port-303164" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:39:05.551998  725691 pod_ready.go:40] duration metric: took 32.952551124s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:39:05.628724  725691 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1115 10:39:05.631691  725691 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-303164" cluster and "default" namespace by default
	W1115 10:39:05.802197  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:08.296682  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:10.796088  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:13.296605  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:15.796558  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	W1115 10:39:18.295786  726986 node_ready.go:57] node "auto-864099" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 10:38:59 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:38:59.107961127Z" level=info msg="Removed container 378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8/dashboard-metrics-scraper" id=18be7d6a-36a7-4bc6-9e0c-883eb1b430ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:39:01 default-k8s-diff-port-303164 conmon[1147]: conmon 5cb75eb11bbd0b60da9e <ninfo>: container 1157 exited with status 1
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.142145601Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9a9da681-1dd7-4c57-8859-8ae059f4aa25 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.147090357Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b1a59432-e179-453f-9d53-7075b951cdce name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.14867469Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=96ff6042-7fc2-49bc-8b75-b6132fe6c65b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.14879403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.163755671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.163981765Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8e7cc255d600cba8962f7e6b90dc0ccb4b8fa0a12cf1be3c72e20d9788d3c687/merged/etc/passwd: no such file or directory"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.164013083Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8e7cc255d600cba8962f7e6b90dc0ccb4b8fa0a12cf1be3c72e20d9788d3c687/merged/etc/group: no such file or directory"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.164382427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.18879369Z" level=info msg="Created container 7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce: kube-system/storage-provisioner/storage-provisioner" id=96ff6042-7fc2-49bc-8b75-b6132fe6c65b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.190064334Z" level=info msg="Starting container: 7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce" id=4ec1e593-050d-4117-94ef-3c60e37b177d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:39:02 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:02.19179414Z" level=info msg="Started container" PID=1638 containerID=7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce description=kube-system/storage-provisioner/storage-provisioner id=4ec1e593-050d-4117-94ef-3c60e37b177d name=/runtime.v1.RuntimeService/StartContainer sandboxID=15e83ebe97d8004a4abacd8eae8bc0cb217e01894a903c6442ec2ce84dbdde08
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.710968077Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.714823427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.714987501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.71508079Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.718509002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.718544472Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.718571097Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.721714082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.721745638Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.721769571Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.724756458Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:39:11 default-k8s-diff-port-303164 crio[651]: time="2025-11-15T10:39:11.724790074Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7566f2a7beef6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   15e83ebe97d80       storage-provisioner                                    kube-system
	cf76876a687d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   18b8546920675       dashboard-metrics-scraper-6ffb444bf9-lhct8             kubernetes-dashboard
	2e643fed5aa28       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   436c86a6c86ce       kubernetes-dashboard-855c9754f9-4mmm8                  kubernetes-dashboard
	68cbd4128a0f6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   3bb342a56cfa5       busybox                                                default
	eb40357445059       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   09609883975c6       coredns-66bc5c9577-97gv6                               kube-system
	acc8eca44366a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   69ee271d8f824       kube-proxy-vmnnc                                       kube-system
	5cb75eb11bbd0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   15e83ebe97d80       storage-provisioner                                    kube-system
	f55f11e9f4617       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   89d73a963f244       kindnet-rph85                                          kube-system
	2c910a1bc9819       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago       Running             kube-scheduler              1                   7f9235846f2c4       kube-scheduler-default-k8s-diff-port-303164            kube-system
	0530aabdcbb5a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago       Running             kube-apiserver              1                   6d8f2f626aa27       kube-apiserver-default-k8s-diff-port-303164            kube-system
	a98fb964f4025       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d4ac7937a63f0       etcd-default-k8s-diff-port-303164                      kube-system
	6b4d8bfc8b089       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   e2d720b979ca4       kube-controller-manager-default-k8s-diff-port-303164   kube-system
	
	
	==> coredns [eb40357445059cc14c5f7b7baf983424338a1f3a04ec773e4e548001a06069e0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47176 - 28927 "HINFO IN 2312371740486467996.6041860029921088659. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021668448s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-303164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-303164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=default-k8s-diff-port-303164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_36_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:36:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-303164
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:39:11 +0000   Sat, 15 Nov 2025 10:37:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-303164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                4f8ed4eb-3c24-41b5-a3a9-de151f112693
	  Boot ID:                    be4dbfeb-291b-4c95-81ce-a1385d3adea5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-97gv6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-default-k8s-diff-port-303164                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-rph85                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-303164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-303164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-vmnnc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-303164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lhct8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4mmm8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m18s              kube-proxy       
	  Normal   Starting                 50s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m25s              kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s              kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s              kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s              node-controller  Node default-k8s-diff-port-303164 event: Registered Node default-k8s-diff-port-303164 in Controller
	  Normal   NodeReady                98s                kubelet          Node default-k8s-diff-port-303164 status is now: NodeReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-303164 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node default-k8s-diff-port-303164 event: Registered Node default-k8s-diff-port-303164 in Controller
	
	
	==> dmesg <==
	[Nov15 10:16] overlayfs: idmapped layers are currently not supported
	[ +16.201490] overlayfs: idmapped layers are currently not supported
	[Nov15 10:17] overlayfs: idmapped layers are currently not supported
	[Nov15 10:18] overlayfs: idmapped layers are currently not supported
	[Nov15 10:19] overlayfs: idmapped layers are currently not supported
	[Nov15 10:20] overlayfs: idmapped layers are currently not supported
	[Nov15 10:22] overlayfs: idmapped layers are currently not supported
	[Nov15 10:24] overlayfs: idmapped layers are currently not supported
	[ +34.764345] overlayfs: idmapped layers are currently not supported
	[Nov15 10:26] overlayfs: idmapped layers are currently not supported
	[Nov15 10:28] overlayfs: idmapped layers are currently not supported
	[Nov15 10:29] overlayfs: idmapped layers are currently not supported
	[Nov15 10:30] overlayfs: idmapped layers are currently not supported
	[ +22.889231] overlayfs: idmapped layers are currently not supported
	[Nov15 10:31] overlayfs: idmapped layers are currently not supported
	[Nov15 10:32] overlayfs: idmapped layers are currently not supported
	[Nov15 10:33] overlayfs: idmapped layers are currently not supported
	[Nov15 10:34] overlayfs: idmapped layers are currently not supported
	[Nov15 10:35] overlayfs: idmapped layers are currently not supported
	[ +45.222836] overlayfs: idmapped layers are currently not supported
	[Nov15 10:36] overlayfs: idmapped layers are currently not supported
	[Nov15 10:37] overlayfs: idmapped layers are currently not supported
	[Nov15 10:38] overlayfs: idmapped layers are currently not supported
	[ +20.770485] overlayfs: idmapped layers are currently not supported
	[ +24.092912] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a98fb964f4025f8c4a4027fd4b096cc84c2f581727a83f5729d88f17aa2c2b22] <==
	{"level":"warn","ts":"2025-11-15T10:38:28.361262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.381003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.399140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.420778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.435671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.448340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.464676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.486384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.498820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.521683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.532319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.557806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.573726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.611460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.620673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.657088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.671761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.706866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.725575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.745919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.770072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.800875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.830042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.838220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:38:28.920235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41422","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:39:23 up  5:21,  0 user,  load average: 3.58, 3.69, 3.12
	Linux default-k8s-diff-port-303164 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f55f11e9f461788084a143dcfa22c6414008456df58d4f0cfdfcfdea76b378d2] <==
	I1115 10:38:31.473889       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:38:31.474124       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:38:31.474239       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:38:31.474250       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:38:31.474260       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:38:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:38:31.718311       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:38:31.718338       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:38:31.718347       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:38:31.718663       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:39:01.711174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:39:01.718975       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:39:01.719101       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:39:01.719214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:39:03.118490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:39:03.118522       1 metrics.go:72] Registering metrics
	I1115 10:39:03.118571       1 controller.go:711] "Syncing nftables rules"
	I1115 10:39:11.710625       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:39:11.710684       1 main.go:301] handling current node
	I1115 10:39:21.717794       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:39:21.717827       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0530aabdcbb5a21f8ba0a88ad2e2bf5546365f9556577f075c171b1c817f1960] <==
	I1115 10:38:30.222774       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:38:30.222958       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:38:30.237383       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:38:30.237838       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:38:30.237865       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:38:30.237874       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:38:30.237881       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:38:30.238025       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:38:30.238071       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1115 10:38:30.247595       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:38:30.300412       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:38:30.303150       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:38:30.303178       1 policy_source.go:240] refreshing policies
	I1115 10:38:30.374511       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:38:30.759266       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:38:30.946146       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:38:31.738797       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:38:31.937533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:38:32.028786       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:38:32.081587       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:38:32.338993       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.120.165"}
	I1115 10:38:32.405281       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.28.103"}
	I1115 10:38:34.638529       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:38:34.984288       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:38:35.216984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6b4d8bfc8b089aa1a7d9c75dabaec5b65337237d2c6f29d3f00908a4c3dcd6bf] <==
	I1115 10:38:34.644276       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:38:34.648331       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:38:34.651160       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:38:34.651288       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:38:34.652485       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:38:34.657817       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:38:34.659980       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:38:34.663168       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:38:34.667408       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:38:34.669652       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:38:34.674015       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:38:34.675751       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:38:34.677878       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:38:34.678075       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:38:34.678520       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-303164"
	I1115 10:38:34.678613       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:38:34.677979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:38:34.677946       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:38:34.677959       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:38:34.677969       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:38:34.680568       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:38:34.685888       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:38:34.701688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:38:34.701764       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:38:34.701797       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [acc8eca44366ae83668276140d7ec0a035ccf8963b6889fe220fec65c5943fe4] <==
	I1115 10:38:32.212584       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:38:32.534759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:38:32.634996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:38:32.635078       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:38:32.635166       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:38:32.688135       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:38:32.688254       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:38:32.711665       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:38:32.712061       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:38:32.712228       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:32.713577       1 config.go:200] "Starting service config controller"
	I1115 10:38:32.713817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:38:32.713881       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:38:32.713935       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:38:32.713973       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:38:32.713999       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:38:32.714844       1 config.go:309] "Starting node config controller"
	I1115 10:38:32.714894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:38:32.714924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:38:32.814793       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:38:32.814832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:38:32.814874       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c910a1bc98190b14a76fa88f7d5067fd7b09b18629ae3b1acf0e8f9394dac1f] <==
	I1115 10:38:26.890625       1 serving.go:386] Generated self-signed cert in-memory
	W1115 10:38:30.103128       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 10:38:30.103251       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 10:38:30.103308       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 10:38:30.103346       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 10:38:30.243263       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:38:30.243383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:38:30.252154       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:38:30.252369       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:38:30.254755       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:30.254848       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:38:30.356278       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198214     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flvw\" (UniqueName: \"kubernetes.io/projected/6d9ff063-ba22-4965-aa68-699ee10b68f9-kube-api-access-2flvw\") pod \"dashboard-metrics-scraper-6ffb444bf9-lhct8\" (UID: \"6d9ff063-ba22-4965-aa68-699ee10b68f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8"
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198552     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d9ff063-ba22-4965-aa68-699ee10b68f9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lhct8\" (UID: \"6d9ff063-ba22-4965-aa68-699ee10b68f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8"
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198738     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmwb\" (UniqueName: \"kubernetes.io/projected/794a2f16-96be-4a3a-822c-5499be15dc22-kube-api-access-rjmwb\") pod \"kubernetes-dashboard-855c9754f9-4mmm8\" (UID: \"794a2f16-96be-4a3a-822c-5499be15dc22\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmm8"
	Nov 15 10:38:35 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:35.198883     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/794a2f16-96be-4a3a-822c-5499be15dc22-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4mmm8\" (UID: \"794a2f16-96be-4a3a-822c-5499be15dc22\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmm8"
	Nov 15 10:38:36 default-k8s-diff-port-303164 kubelet[779]: W1115 10:38:36.365943     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-18b8546920675846bcd65769a3b292beeaceb064e4e6319525af1889c3f6958e WatchSource:0}: Error finding container 18b8546920675846bcd65769a3b292beeaceb064e4e6319525af1889c3f6958e: Status 404 returned error can't find the container with id 18b8546920675846bcd65769a3b292beeaceb064e4e6319525af1889c3f6958e
	Nov 15 10:38:36 default-k8s-diff-port-303164 kubelet[779]: W1115 10:38:36.385411     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/41c6c089346a2357e656ce1bec89562296a315677a136901b9744273006d70ec/crio-436c86a6c86ce5e8cebea3030ec623a90322ba61a28497a4b12936aae806638f WatchSource:0}: Error finding container 436c86a6c86ce5e8cebea3030ec623a90322ba61a28497a4b12936aae806638f: Status 404 returned error can't find the container with id 436c86a6c86ce5e8cebea3030ec623a90322ba61a28497a4b12936aae806638f
	Nov 15 10:38:43 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:43.023243     779 scope.go:117] "RemoveContainer" containerID="cfcc8877bc9073350007f14fba989d5e0e8aa8164ff56e32288faf68ece14947"
	Nov 15 10:38:44 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:44.027228     779 scope.go:117] "RemoveContainer" containerID="cfcc8877bc9073350007f14fba989d5e0e8aa8164ff56e32288faf68ece14947"
	Nov 15 10:38:44 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:44.027521     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:44 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:44.027669     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:45 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:45.032465     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:45 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:45.032640     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:46 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:46.308549     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:46 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:46.308724     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:58 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:58.854820     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:59.072889     779 scope.go:117] "RemoveContainer" containerID="378ea13270ae3d67af9cc1866fcfe99a73ed7ef506e094618cae4bc12d79d801"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:59.073113     779 scope.go:117] "RemoveContainer" containerID="cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: E1115 10:38:59.073304     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:38:59 default-k8s-diff-port-303164 kubelet[779]: I1115 10:38:59.106273     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mmm8" podStartSLOduration=10.932090506 podStartE2EDuration="24.106254582s" podCreationTimestamp="2025-11-15 10:38:35 +0000 UTC" firstStartedPulling="2025-11-15 10:38:36.393488182 +0000 UTC m=+13.852619005" lastFinishedPulling="2025-11-15 10:38:49.567652258 +0000 UTC m=+27.026783081" observedRunningTime="2025-11-15 10:38:50.079104151 +0000 UTC m=+27.538234982" watchObservedRunningTime="2025-11-15 10:38:59.106254582 +0000 UTC m=+36.565385413"
	Nov 15 10:39:02 default-k8s-diff-port-303164 kubelet[779]: I1115 10:39:02.138090     779 scope.go:117] "RemoveContainer" containerID="5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed"
	Nov 15 10:39:06 default-k8s-diff-port-303164 kubelet[779]: I1115 10:39:06.308578     779 scope.go:117] "RemoveContainer" containerID="cf76876a687d20a38bd839d035595084bd6d94c1f4dfe4203497cd9f62dfc593"
	Nov 15 10:39:06 default-k8s-diff-port-303164 kubelet[779]: E1115 10:39:06.309261     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lhct8_kubernetes-dashboard(6d9ff063-ba22-4965-aa68-699ee10b68f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lhct8" podUID="6d9ff063-ba22-4965-aa68-699ee10b68f9"
	Nov 15 10:39:18 default-k8s-diff-port-303164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:39:18 default-k8s-diff-port-303164 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:39:18 default-k8s-diff-port-303164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2e643fed5aa284e7891d963b79c953d7c3d1f44044faa4dd0248eb955adca97f] <==
	2025/11/15 10:38:49 Using namespace: kubernetes-dashboard
	2025/11/15 10:38:49 Using in-cluster config to connect to apiserver
	2025/11/15 10:38:49 Using secret token for csrf signing
	2025/11/15 10:38:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:38:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:38:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:38:49 Generating JWE encryption key
	2025/11/15 10:38:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:38:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:38:50 Initializing JWE encryption key from synchronized object
	2025/11/15 10:38:50 Creating in-cluster Sidecar client
	2025/11/15 10:38:50 Serving insecurely on HTTP port: 9090
	2025/11/15 10:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:38:49 Starting overwatch
	
	
	==> storage-provisioner [5cb75eb11bbd0b60da9e1d96609a3e36b9d59a6bbe55060fc6e322be02ff99ed] <==
	I1115 10:38:31.847853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:39:01.850166       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7566f2a7beef6139c69752a15ff9d5a2875f4987bb7d5b3e4353bac2563ea7ce] <==
	I1115 10:39:02.210575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:39:02.223182       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:39:02.223242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:39:02.225442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:05.681567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:09.942144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:13.541245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:16.595115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:19.617448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:19.622328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:39:19.622463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:39:19.622629       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303164_79107faa-ad3f-4883-8971-7cbfdae8f2f2!
	I1115 10:39:19.623501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6d7f6c5-5cd5-4e38-9b83-ceab25b500ef", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-303164_79107faa-ad3f-4883-8971-7cbfdae8f2f2 became leader
	W1115 10:39:19.629698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:19.635921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:39:19.722787       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-303164_79107faa-ad3f-4883-8971-7cbfdae8f2f2!
	W1115 10:39:21.639655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:21.644464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:23.652083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:39:23.659493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164: exit status 2 (361.903708ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.22s)
E1115 10:45:09.583296  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
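For reference, the post-mortem checks above can be rerun by hand against the same profile; a minimal sketch, assuming the default-k8s-diff-port-303164 profile and its kubeconfig context still exist (both commands are the ones the test harness runs):

	# Query a single component's status via a Go template; exit status 2 means a component is not Running (may be ok)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164

	# List any pods not in the Running phase, across all namespaces
	kubectl --context default-k8s-diff-port-303164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running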

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.25
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.85
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.15
18 TestDownloadOnly/v1.34.1/DeleteAll 0.33
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 161.16
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.87
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 36.8
50 TestCertExpiration 255.58
52 TestForceSystemdFlag 44.28
53 TestForceSystemdEnv 44.16
58 TestErrorSpam/setup 31.46
59 TestErrorSpam/start 0.73
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 6.99
62 TestErrorSpam/unpause 5.15
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.94
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.42
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.63
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 49.5
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.53
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.38
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 8.25
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.05
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 28.92
101 TestFunctional/parallel/SSHCmd 0.68
102 TestFunctional/parallel/CpCmd 2.46
104 TestFunctional/parallel/FileSync 0.39
105 TestFunctional/parallel/CertSync 2.17
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
113 TestFunctional/parallel/License 0.48
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.52
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 7.78
130 TestFunctional/parallel/MountCmd/specific-port 2.06
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
132 TestFunctional/parallel/ServiceCmd/List 1.47
133 TestFunctional/parallel/ServiceCmd/JSONOutput 1.43
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 1.03
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
144 TestFunctional/parallel/ImageCommands/Setup 0.65
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 209
163 TestMultiControlPlane/serial/DeployApp 7.79
164 TestMultiControlPlane/serial/PingHostFromPods 1.43
165 TestMultiControlPlane/serial/AddWorkerNode 60.43
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.16
168 TestMultiControlPlane/serial/CopyFile 19.82
169 TestMultiControlPlane/serial/StopSecondaryNode 12.89
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 31.21
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.24
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 115.99
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.9
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.21
177 TestMultiControlPlane/serial/RestartCluster 73.6
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 83.29
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
185 TestJSONOutput/start/Command 79.86
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 38.08
211 TestKicCustomNetwork/use_default_bridge_network 33.55
212 TestKicExistingNetwork 32.81
213 TestKicCustomSubnet 35.97
214 TestKicStaticIP 35.09
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 75.08
219 TestMountStart/serial/StartWithMountFirst 8.83
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 10.22
222 TestMountStart/serial/VerifyMountSecond 0.31
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 9.07
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 107.75
231 TestMultiNode/serial/DeployApp2Nodes 4.97
232 TestMultiNode/serial/PingHostFrom2Pods 0.94
233 TestMultiNode/serial/AddNode 58.47
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.51
237 TestMultiNode/serial/StopNode 2.41
238 TestMultiNode/serial/StartAfterStop 7.94
239 TestMultiNode/serial/RestartKeepsNodes 80.22
240 TestMultiNode/serial/DeleteNode 5.63
241 TestMultiNode/serial/StopMultiNode 24.04
242 TestMultiNode/serial/RestartMultiNode 53.66
243 TestMultiNode/serial/ValidateNameConflict 36.01
248 TestPreload 130.25
250 TestScheduledStopUnix 110.13
253 TestInsufficientStorage 13.94
254 TestRunningBinaryUpgrade 55.32
256 TestKubernetesUpgrade 368.79
257 TestMissingContainerUpgrade 125.47
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 46.53
261 TestNoKubernetes/serial/StartWithStopK8s 18.53
262 TestNoKubernetes/serial/Start 8.93
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
265 TestNoKubernetes/serial/ProfileList 0.7
266 TestNoKubernetes/serial/Stop 1.3
267 TestNoKubernetes/serial/StartNoArgs 6.87
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 0.68
270 TestStoppedBinaryUpgrade/Upgrade 58.95
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
280 TestPause/serial/Start 84.32
281 TestPause/serial/SecondStartNoReconfiguration 43.81
290 TestNetworkPlugins/group/false 4.01
295 TestStartStop/group/old-k8s-version/serial/FirstStart 60.97
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
298 TestStartStop/group/old-k8s-version/serial/Stop 12.19
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 52.85
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
306 TestStartStop/group/no-preload/serial/FirstStart 75.48
308 TestStartStop/group/embed-certs/serial/FirstStart 86.98
309 TestStartStop/group/no-preload/serial/DeployApp 8.38
311 TestStartStop/group/no-preload/serial/Stop 12.02
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/no-preload/serial/SecondStart 56.61
314 TestStartStop/group/embed-certs/serial/DeployApp 9.44
316 TestStartStop/group/embed-certs/serial/Stop 12.03
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
318 TestStartStop/group/embed-certs/serial/SecondStart 59.15
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.94
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
330 TestStartStop/group/newest-cni/serial/FirstStart 38.26
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
332 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.49
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
336 TestStartStop/group/newest-cni/serial/SecondStart 16.37
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.45
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.77
345 TestNetworkPlugins/group/auto/Start 85.98
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
350 TestNetworkPlugins/group/kindnet/Start 83.45
351 TestNetworkPlugins/group/auto/KubeletFlags 0.47
352 TestNetworkPlugins/group/auto/NetCatPod 12.4
353 TestNetworkPlugins/group/auto/DNS 0.18
354 TestNetworkPlugins/group/auto/Localhost 0.18
355 TestNetworkPlugins/group/auto/HairPin 0.2
356 TestNetworkPlugins/group/calico/Start 64.38
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
360 TestNetworkPlugins/group/kindnet/DNS 0.21
361 TestNetworkPlugins/group/kindnet/Localhost 0.16
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.43
365 TestNetworkPlugins/group/calico/NetCatPod 11.36
366 TestNetworkPlugins/group/custom-flannel/Start 68.65
367 TestNetworkPlugins/group/calico/DNS 0.34
368 TestNetworkPlugins/group/calico/Localhost 0.36
369 TestNetworkPlugins/group/calico/HairPin 0.15
370 TestNetworkPlugins/group/enable-default-cni/Start 45.59
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
373 TestNetworkPlugins/group/custom-flannel/DNS 0.16
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/flannel/Start 70.57
382 TestNetworkPlugins/group/bridge/Start 74.81
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
385 TestNetworkPlugins/group/flannel/NetCatPod 10.25
386 TestNetworkPlugins/group/flannel/DNS 0.16
387 TestNetworkPlugins/group/flannel/Localhost 0.15
388 TestNetworkPlugins/group/flannel/HairPin 0.2
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
390 TestNetworkPlugins/group/bridge/NetCatPod 10.44
391 TestNetworkPlugins/group/bridge/DNS 0.2
392 TestNetworkPlugins/group/bridge/Localhost 0.18
393 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.28.0/json-events (5.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-446723 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-446723 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.250061634s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.25s)
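The -o=json flag used here makes minikube write its progress as one JSON event per line on stdout, which is what the json-events assertions consume. A minimal sketch for inspecting the same stream interactively (the profile name download-only-demo is hypothetical; jq is assumed to be installed):

	# Pretty-print every JSON event emitted by a download-only start
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo --force \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq .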

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 09:32:41.048368  516637 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1115 09:32:41.048457  516637 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
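This check only asserts that the preload tarball is already present in the local cache; a minimal sketch of the equivalent manual check, using the path reported in the log above:

	# Confirm the v1.28.0 CRI-O preload tarball exists in the minikube cache
	ls -lh /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4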

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-446723
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-446723: exit status 85 (80.781782ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-446723 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-446723 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:32:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:32:35.843081  516642 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:32:35.843249  516642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:35.843280  516642 out.go:374] Setting ErrFile to fd 2...
	I1115 09:32:35.843300  516642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:35.843571  516642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	W1115 09:32:35.843731  516642 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21895-514793/.minikube/config/config.json: open /home/jenkins/minikube-integration/21895-514793/.minikube/config/config.json: no such file or directory
	I1115 09:32:35.844170  516642 out.go:368] Setting JSON to true
	I1115 09:32:35.845039  516642 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15307,"bootTime":1763183849,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:32:35.845139  516642 start.go:143] virtualization:  
	I1115 09:32:35.849067  516642 out.go:99] [download-only-446723] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1115 09:32:35.849256  516642 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 09:32:35.849346  516642 notify.go:221] Checking for updates...
	I1115 09:32:35.852244  516642 out.go:171] MINIKUBE_LOCATION=21895
	I1115 09:32:35.855479  516642 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:32:35.858390  516642 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:32:35.861298  516642 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:32:35.864219  516642 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1115 09:32:35.869877  516642 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:32:35.870149  516642 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:32:35.892972  516642 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:32:35.893082  516642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:35.954828  516642 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-15 09:32:35.945927155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:35.954936  516642 docker.go:319] overlay module found
	I1115 09:32:35.957864  516642 out.go:99] Using the docker driver based on user configuration
	I1115 09:32:35.957907  516642 start.go:309] selected driver: docker
	I1115 09:32:35.957915  516642 start.go:930] validating driver "docker" against <nil>
	I1115 09:32:35.958036  516642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:36.014918  516642 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-15 09:32:36.004663291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:36.015073  516642 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:32:36.015391  516642 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1115 09:32:36.015554  516642 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:32:36.018730  516642 out.go:171] Using Docker driver with root privileges
	I1115 09:32:36.021885  516642 cni.go:84] Creating CNI manager for ""
	I1115 09:32:36.021967  516642 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:32:36.021985  516642 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:32:36.022075  516642 start.go:353] cluster config:
	{Name:download-only-446723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-446723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:32:36.025159  516642 out.go:99] Starting "download-only-446723" primary control-plane node in "download-only-446723" cluster
	I1115 09:32:36.025194  516642 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:32:36.028186  516642 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:32:36.028282  516642 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:32:36.028380  516642 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:32:36.044658  516642 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:32:36.044879  516642 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:32:36.044977  516642 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:32:36.083213  516642 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 09:32:36.083243  516642 cache.go:65] Caching tarball of preloaded images
	I1115 09:32:36.083399  516642 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:32:36.086664  516642 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1115 09:32:36.086694  516642 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1115 09:32:36.173908  516642 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1115 09:32:36.174082  516642 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1115 09:32:39.175665  516642 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1115 09:32:39.176031  516642 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/download-only-446723/config.json ...
	I1115 09:32:39.176067  516642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/download-only-446723/config.json: {Name:mk017417fb8ac0c34a8bcf65d80a1890423db44a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:32:39.176227  516642 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:32:39.176440  516642 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21895-514793/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-446723 host does not exist
	  To start a cluster, run: "minikube start -p download-only-446723"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
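Note: the preload and kubectl artifacts referenced in the log above are plain HTTPS objects; a minimal sketch of fetching and verifying the same preload tarball outside of minikube, using only the URL and md5 checksum reported in the log (the curl and md5sum invocations are illustrative and not part of the test):

    # Download the v1.28.0 cri-o arm64 preload tarball directly from GCS
    curl -fLo preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
    # Verify it against the checksum minikube obtained from the GCS API
    echo "e092595ade89dbfc477bd4cd6b9c633b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -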

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-446723
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-409645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-409645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.851571413s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 09:32:45.328363  516637 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:32:45.328399  516637 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-409645
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-409645: exit status 85 (149.586469ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-446723 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-446723 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ delete  │ -p download-only-446723                                                                                                                                                   │ download-only-446723 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │ 15 Nov 25 09:32 UTC │
	│ start   │ -o=json --download-only -p download-only-409645 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-409645 │ jenkins │ v1.37.0 │ 15 Nov 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:32:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:32:41.520112  516842 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:32:41.520306  516842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:41.520339  516842 out.go:374] Setting ErrFile to fd 2...
	I1115 09:32:41.520359  516842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:32:41.520653  516842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:32:41.521085  516842 out.go:368] Setting JSON to true
	I1115 09:32:41.521971  516842 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15313,"bootTime":1763183849,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:32:41.522067  516842 start.go:143] virtualization:  
	I1115 09:32:41.525308  516842 out.go:99] [download-only-409645] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 09:32:41.525588  516842 notify.go:221] Checking for updates...
	I1115 09:32:41.529412  516842 out.go:171] MINIKUBE_LOCATION=21895
	I1115 09:32:41.532516  516842 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:32:41.535396  516842 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:32:41.538500  516842 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:32:41.541387  516842 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1115 09:32:41.546972  516842 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:32:41.547263  516842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:32:41.571136  516842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:32:41.571245  516842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:41.629820  516842 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-15 09:32:41.6207287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:41.629924  516842 docker.go:319] overlay module found
	I1115 09:32:41.632903  516842 out.go:99] Using the docker driver based on user configuration
	I1115 09:32:41.632931  516842 start.go:309] selected driver: docker
	I1115 09:32:41.632939  516842 start.go:930] validating driver "docker" against <nil>
	I1115 09:32:41.633053  516842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:32:41.696898  516842 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-15 09:32:41.687896315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:32:41.697059  516842 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:32:41.697349  516842 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1115 09:32:41.697504  516842 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:32:41.700617  516842 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-409645 host does not exist
	  To start a cluster, run: "minikube start -p download-only-409645"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-409645
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1115 09:32:47.198788  516637 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-339675 --alsologtostderr --binary-mirror http://127.0.0.1:41649 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-339675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-339675
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-612806
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-612806: exit status 85 (72.284164ms)

                                                
                                                
-- stdout --
	* Profile "addons-612806" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-612806"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-612806
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-612806: exit status 85 (78.376636ms)

                                                
                                                
-- stdout --
	* Profile "addons-612806" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-612806"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (161.16s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-612806 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-612806 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.161990835s)
--- PASS: TestAddons/Setup (161.16s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-612806 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-612806 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.87s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-612806 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-612806 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [534ea648-3674-422e-81eb-92e52637c9e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [534ea648-3674-422e-81eb-92e52637c9e8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003871848s
addons_test.go:694: (dbg) Run:  kubectl --context addons-612806 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-612806 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-612806 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-612806 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.87s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-612806
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-612806: (12.172138613s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-612806
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-612806
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-612806
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

                                                
                                    
x
+
TestCertOptions (36.8s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1115 10:30:12.893716  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-115480 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.853667081s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-115480 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-115480 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-115480 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-115480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-115480
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-115480: (2.196096774s)
--- PASS: TestCertOptions (36.80s)

                                                
                                    
x
+
TestCertExpiration (255.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-845026 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.455853996s)
E1115 10:30:29.828194  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-845026 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.23722312s)
helpers_test.go:175: Cleaning up "cert-expiration-845026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-845026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-845026: (2.884304787s)
--- PASS: TestCertExpiration (255.58s)
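Note: the two starts above only verify that minikube regenerates certificates when --cert-expiration changes; to see the expiry actually written to the API server certificate, the same in-node openssl inspection used by TestCertOptions can be reused. A minimal sketch, assuming the cert-expiration-845026 profile has not yet been deleted:

    # Print only the notAfter date of the apiserver certificate inside the node (illustrative follow-up, not part of the test)
    out/minikube-linux-arm64 -p cert-expiration-845026 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"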

                                                
                                    
x
+
TestForceSystemdFlag (44.28s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-106884 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-106884 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.057851912s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-106884 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-106884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-106884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-106884: (2.678507532s)
--- PASS: TestForceSystemdFlag (44.28s)

                                                
                                    
x
+
TestForceSystemdEnv (44.16s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-683299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-683299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.411550066s)
helpers_test.go:175: Cleaning up "force-systemd-env-683299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-683299
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-683299: (2.748620616s)
--- PASS: TestForceSystemdEnv (44.16s)

                                                
                                    
x
+
TestErrorSpam/setup (31.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-215162 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-215162 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-215162 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-215162 --driver=docker  --container-runtime=crio: (31.461017411s)
--- PASS: TestErrorSpam/setup (31.46s)

                                                
                                    
x
+
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
x
+
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
x
+
TestErrorSpam/pause (6.99s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause: exit status 80 (2.339935418s)

                                                
                                                
-- stdout --
	* Pausing node nospam-215162 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:39:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause: exit status 80 (2.396373205s)

                                                
                                                
-- stdout --
	* Pausing node nospam-215162 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:39:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause: exit status 80 (2.255320608s)

                                                
                                                
-- stdout --
	* Pausing node nospam-215162 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:39:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.99s)
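Note: the exit-status-80 failures above come from the guest-side check that `minikube pause` performs before pausing: it lists running containers with runc, and runc's default state directory (/run/runc) does not exist inside this crio node. A minimal sketch for reproducing that check by hand, assuming the nospam-215162 profile still exists:

    # Run the same listing that produced "open /run/runc: no such file or directory" in the stderr blocks above
    out/minikube-linux-arm64 -p nospam-215162 ssh "sudo runc list -f json"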

                                                
                                    
x
+
TestErrorSpam/unpause (5.15s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause: exit status 80 (1.678403309s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-215162 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:39:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause: exit status 80 (1.934745545s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-215162 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:39:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause: exit status 80 (1.536630232s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-215162 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:39:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.15s)

                                                
                                    
x
+
TestErrorSpam/stop (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 stop: (1.311160262s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-215162 --log_dir /tmp/nospam-215162 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21895-514793/.minikube/files/etc/test/nested/copy/516637/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-755106 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1115 09:40:29.825906  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:29.832384  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:29.843822  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:29.865256  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:29.906688  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:29.988197  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:30.149729  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:30.471490  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:31.112914  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:32.394865  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:34.957211  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:40.079211  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:40:50.321319  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:41:10.802674  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-755106 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.944518736s)
--- PASS: TestFunctional/serial/StartWithProxy (81.94s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.42s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1115 09:41:14.337668  516637 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-755106 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-755106 --alsologtostderr -v=8: (29.418665107s)
functional_test.go:678: soft start took 29.419185665s for "functional-755106" cluster.
I1115 09:41:43.756617  516637 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.42s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-755106 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 cache add registry.k8s.io/pause:3.1: (1.243629963s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 cache add registry.k8s.io/pause:3.3: (1.257539047s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 cache add registry.k8s.io/pause:latest: (1.124015551s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-755106 /tmp/TestFunctionalserialCacheCmdcacheadd_local65108814/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cache add minikube-local-cache-test:functional-755106
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cache delete minikube-local-cache-test:functional-755106
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-755106
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.056225ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 kubectl -- --context functional-755106 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-755106 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (49.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-755106 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1115 09:41:51.764709  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-755106 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.500579078s)
functional_test.go:776: restart took 49.500672606s for "functional-755106" cluster.
I1115 09:42:40.775309  516637 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (49.50s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-755106 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.53s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 logs: (1.533593485s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 logs --file /tmp/TestFunctionalserialLogsFileCmd68522754/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 logs --file /tmp/TestFunctionalserialLogsFileCmd68522754/001/logs.txt: (1.463524358s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-755106 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-755106
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-755106: exit status 115 (403.220021ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32287 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-755106 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 config get cpus: exit status 14 (93.605272ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 config get cpus: exit status 14 (83.721696ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (8.25s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-755106 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-755106 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 543069: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.25s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-755106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-755106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.876413ms)

-- stdout --
	* [functional-755106] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1115 09:53:17.253570  542818 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:53:17.253785  542818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:17.253816  542818 out.go:374] Setting ErrFile to fd 2...
	I1115 09:53:17.253836  542818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:17.254102  542818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:53:17.254498  542818 out.go:368] Setting JSON to false
	I1115 09:53:17.255427  542818 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16549,"bootTime":1763183849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:53:17.255531  542818 start.go:143] virtualization:  
	I1115 09:53:17.258877  542818 out.go:179] * [functional-755106] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 09:53:17.261890  542818 notify.go:221] Checking for updates...
	I1115 09:53:17.262750  542818 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:53:17.265722  542818 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:53:17.268603  542818 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:53:17.271521  542818 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:53:17.274405  542818 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 09:53:17.277249  542818 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:53:17.280538  542818 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:17.281710  542818 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:53:17.309741  542818 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:53:17.309866  542818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:53:17.373896  542818 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 09:53:17.364209406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:53:17.374017  542818 docker.go:319] overlay module found
	I1115 09:53:17.377110  542818 out.go:179] * Using the docker driver based on existing profile
	I1115 09:53:17.379962  542818 start.go:309] selected driver: docker
	I1115 09:53:17.379984  542818 start.go:930] validating driver "docker" against &{Name:functional-755106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:53:17.380091  542818 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:53:17.383661  542818 out.go:203] 
	W1115 09:53:17.386470  542818 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:53:17.389249  542818 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-755106 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-755106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-755106 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (207.193948ms)

-- stdout --
	* [functional-755106] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1115 09:53:17.056337  542770 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:53:17.056479  542770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:17.056492  542770 out.go:374] Setting ErrFile to fd 2...
	I1115 09:53:17.056512  542770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:17.056948  542770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:53:17.057429  542770 out.go:368] Setting JSON to false
	I1115 09:53:17.058478  542770 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16548,"bootTime":1763183849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 09:53:17.058544  542770 start.go:143] virtualization:  
	I1115 09:53:17.062183  542770 out.go:179] * [functional-755106] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1115 09:53:17.065187  542770 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:53:17.065230  542770 notify.go:221] Checking for updates...
	I1115 09:53:17.070959  542770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:53:17.073716  542770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 09:53:17.076548  542770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 09:53:17.079384  542770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 09:53:17.082275  542770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:53:17.085688  542770 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:17.086289  542770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:53:17.114324  542770 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 09:53:17.114442  542770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:53:17.182625  542770 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-15 09:53:17.167235259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:53:17.182734  542770 docker.go:319] overlay module found
	I1115 09:53:17.185818  542770 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1115 09:53:17.189365  542770 start.go:309] selected driver: docker
	I1115 09:53:17.189387  542770 start.go:930] validating driver "docker" against &{Name:functional-755106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-755106 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:53:17.189494  542770 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:53:17.192983  542770 out.go:203] 
	W1115 09:53:17.195808  542770 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 09:53:17.198714  542770 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (28.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f9c7c0f2-96cd-4980-89f2-400915c56162] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003504737s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-755106 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-755106 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-755106 get pvc myclaim -o=json
I1115 09:42:56.044710  516637 retry.go:31] will retry after 2.920433254s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c53f315c-6b88-4472-b165-dedcd2c543a7 ResourceVersion:691 Generation:0 CreationTimestamp:2025-11-15 09:42:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x4000c612c0 VolumeMode:0x4000c612d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-755106 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-755106 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-755106 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f7fd6e9c-76a2-4a41-b6ac-f2d1b774581d] Pending
helpers_test.go:352: "sp-pod" [f7fd6e9c-76a2-4a41-b6ac-f2d1b774581d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f7fd6e9c-76a2-4a41-b6ac-f2d1b774581d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003606738s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-755106 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-755106 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-755106 delete -f testdata/storage-provisioner/pod.yaml: (1.260946266s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-755106 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [70bf6f63-bc41-461e-b57c-15fdc50decbf] Pending
E1115 09:43:13.686339  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [70bf6f63-bc41-461e-b57c-15fdc50decbf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003426006s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-755106 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.92s)

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh -n functional-755106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cp functional-755106:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2174683691/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh -n functional-755106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh -n functional-755106 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.46s)

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/516637/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /etc/test/nested/copy/516637/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/516637.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /etc/ssl/certs/516637.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/516637.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /usr/share/ca-certificates/516637.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5166372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /etc/ssl/certs/5166372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5166372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /usr/share/ca-certificates/5166372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-755106 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh "sudo systemctl is-active docker": exit status 1 (395.123647ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh "sudo systemctl is-active containerd": exit status 1 (416.378753ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-755106 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-755106 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-755106 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 539505: os: process already finished
helpers_test.go:519: unable to terminate pid 539300: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-755106 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-755106 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-755106 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9f89c801-b4f7-470a-93e6-a61fdb902496] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9f89c801-b4f7-470a-93e6-a61fdb902496] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003298876s
I1115 09:42:58.588248  516637 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.52s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-755106 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.35.248 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-755106 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "377.141241ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.291808ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "384.177938ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.341184ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdany-port3350836996/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763200384036444451" to /tmp/TestFunctionalparallelMountCmdany-port3350836996/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763200384036444451" to /tmp/TestFunctionalparallelMountCmdany-port3350836996/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763200384036444451" to /tmp/TestFunctionalparallelMountCmdany-port3350836996/001/test-1763200384036444451
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.230282ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1115 09:53:04.389948  516637 retry.go:31] will retry after 391.287766ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 09:53 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 09:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 09:53 test-1763200384036444451
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh cat /mount-9p/test-1763200384036444451
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-755106 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c1bb83d8-4a6a-4ddc-a2b5-41ed0fa08317] Pending
helpers_test.go:352: "busybox-mount" [c1bb83d8-4a6a-4ddc-a2b5-41ed0fa08317] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c1bb83d8-4a6a-4ddc-a2b5-41ed0fa08317] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c1bb83d8-4a6a-4ddc-a2b5-41ed0fa08317] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003256463s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-755106 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdany-port3350836996/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)

TestFunctional/parallel/MountCmd/specific-port (2.06s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdspecific-port1608664980/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.834701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:53:12.161700  516637 retry.go:31] will retry after 687.43047ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdspecific-port1608664980/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh "sudo umount -f /mount-9p": exit status 1 (274.991127ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-755106 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdspecific-port1608664980/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)
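The specific-port variant pins the 9p server to host port 46464 and then checks that a forced unmount of an already-unmounted path fails. A minimal sketch, assuming the same profile and a placeholder host directory; the exit status 32 reading follows umount(8) and the "not mounted" output captured above:

    # pin the 9p server to a fixed host port
    minikube mount -p functional-755106 /tmp/hostdir:/mount-9p --port 46464 &
    # after the mount helper is stopped, a forced unmount reports "not mounted"
    minikube -p functional-755106 ssh "sudo umount -f /mount-9p"   # exits with status 32 when nothing is mounted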

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1093730878/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1093730878/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1093730878/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T" /mount1: exit status 1 (570.399268ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:53:14.454281  516637 retry.go:31] will retry after 599.090382ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-755106 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1093730878/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1093730878/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-755106 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1093730878/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)
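The cleanup path relies on `minikube mount --kill=true`, which terminates every mount helper for the profile at once (as run at functional_test_mount_test.go:370 above). A minimal sketch, assuming a placeholder host directory:

    # share one host directory at three guest paths
    for m in /mount1 /mount2 /mount3; do
      minikube mount -p functional-755106 /tmp/hostdir:$m &
    done
    # kill all mount processes for the profile in one shot
    minikube mount -p functional-755106 --kill=true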

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 service list: (1.466849195s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 service list -o json: (1.429548346s)
functional_test.go:1504: Took "1.429624405s" to run "out/minikube-linux-arm64 -p functional-755106 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)
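`service list -o json` emits a JSON array, which makes it easy to post-process. A minimal sketch; the Namespace, Name, and URLs field names are an assumption based on recent minikube releases and are not shown in this log:

    # print namespace/name and any URLs for each service (field names assumed)
    minikube -p functional-755106 service list -o json \
      | jq -r '.[] | "\(.Namespace)/\(.Name)  \((.URLs // []) | join(","))"'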

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 version -o=json --components: (1.029767518s)
--- PASS: TestFunctional/parallel/Version/components (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-755106 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-755106 image ls --format short --alsologtostderr:
I1115 09:53:33.245749  545563 out.go:360] Setting OutFile to fd 1 ...
I1115 09:53:33.245876  545563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.245886  545563 out.go:374] Setting ErrFile to fd 2...
I1115 09:53:33.245892  545563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.246243  545563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
I1115 09:53:33.247137  545563 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.247262  545563 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.247924  545563 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
I1115 09:53:33.272108  545563 ssh_runner.go:195] Run: systemctl --version
I1115 09:53:33.272162  545563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
I1115 09:53:33.297681  545563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
I1115 09:53:33.408268  545563 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-755106 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ 2d5a8f08b76da │ 176MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-755106 image ls --format table --alsologtostderr:
I1115 09:53:33.521734  545632 out.go:360] Setting OutFile to fd 1 ...
I1115 09:53:33.521845  545632 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.521901  545632 out.go:374] Setting ErrFile to fd 2...
I1115 09:53:33.521906  545632 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.522231  545632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
I1115 09:53:33.523167  545632 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.523324  545632 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.523874  545632 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
I1115 09:53:33.570854  545632 ssh_runner.go:195] Run: systemctl --version
I1115 09:53:33.570912  545632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
I1115 09:53:33.593341  545632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
I1115 09:53:33.708473  545632 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-755106 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":[
"registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner
:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c
9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docke
r.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f93837913
3ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006678"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-755106 image ls --format json --alsologtostderr:
I1115 09:53:33.256788  545567 out.go:360] Setting OutFile to fd 1 ...
I1115 09:53:33.257024  545567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.257038  545567 out.go:374] Setting ErrFile to fd 2...
I1115 09:53:33.257043  545567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.257353  545567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
I1115 09:53:33.258018  545567 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.258205  545567 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.258782  545567 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
I1115 09:53:33.283518  545567 ssh_runner.go:195] Run: systemctl --version
I1115 09:53:33.283596  545567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
I1115 09:53:33.314429  545567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
I1115 09:53:33.426641  545567 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
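The JSON schema is visible in the stdout above (id, repoDigests, repoTags, size), so the output can be filtered directly. A minimal sketch using only the fields shown:

    # list only tagged images with their sizes, from the fields shown above
    minikube -p functional-755106 image ls --format json \
      | jq -r '.[] | select(.repoTags != []) | "\(.repoTags[0])  \(.size)"'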

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-755106 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33
repoTags:
- docker.io/library/nginx:latest
size: "176006678"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-755106 image ls --format yaml --alsologtostderr:
I1115 09:53:33.820261  545730 out.go:360] Setting OutFile to fd 1 ...
I1115 09:53:33.820413  545730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.820420  545730 out.go:374] Setting ErrFile to fd 2...
I1115 09:53:33.820424  545730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.820818  545730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
I1115 09:53:33.821955  545730 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.822098  545730 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.822775  545730 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
I1115 09:53:33.848484  545730 ssh_runner.go:195] Run: systemctl --version
I1115 09:53:33.848536  545730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
I1115 09:53:33.868914  545730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
I1115 09:53:33.980632  545730 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-755106 ssh pgrep buildkitd: exit status 1 (350.425479ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image build -t localhost/my-image:functional-755106 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-755106 image build -t localhost/my-image:functional-755106 testdata/build --alsologtostderr: (3.384804249s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-755106 image build -t localhost/my-image:functional-755106 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e79a217f6ff
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-755106
--> be3220dcfd0
Successfully tagged localhost/my-image:functional-755106
be3220dcfd0c4231dc2c55907252d1e8153c348145581fa0d193ad98cceaee69
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-755106 image build -t localhost/my-image:functional-755106 testdata/build --alsologtostderr:
I1115 09:53:33.906809  545747 out.go:360] Setting OutFile to fd 1 ...
I1115 09:53:33.907841  545747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.907863  545747 out.go:374] Setting ErrFile to fd 2...
I1115 09:53:33.907869  545747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:53:33.908227  545747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
I1115 09:53:33.908967  545747 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.909850  545747 config.go:182] Loaded profile config "functional-755106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:53:33.910356  545747 cli_runner.go:164] Run: docker container inspect functional-755106 --format={{.State.Status}}
I1115 09:53:33.935257  545747 ssh_runner.go:195] Run: systemctl --version
I1115 09:53:33.935309  545747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-755106
I1115 09:53:33.953467  545747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/functional-755106/id_rsa Username:docker}
I1115 09:53:34.076520  545747 build_images.go:162] Building image from path: /tmp/build.3618341285.tar
I1115 09:53:34.076600  545747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 09:53:34.084554  545747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3618341285.tar
I1115 09:53:34.088113  545747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3618341285.tar: stat -c "%s %y" /var/lib/minikube/build/build.3618341285.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3618341285.tar': No such file or directory
I1115 09:53:34.088144  545747 ssh_runner.go:362] scp /tmp/build.3618341285.tar --> /var/lib/minikube/build/build.3618341285.tar (3072 bytes)
I1115 09:53:34.106664  545747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3618341285
I1115 09:53:34.114321  545747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3618341285 -xf /var/lib/minikube/build/build.3618341285.tar
I1115 09:53:34.122332  545747 crio.go:315] Building image: /var/lib/minikube/build/build.3618341285
I1115 09:53:34.122421  545747 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-755106 /var/lib/minikube/build/build.3618341285 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1115 09:53:37.196319  545747 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-755106 /var/lib/minikube/build/build.3618341285 --cgroup-manager=cgroupfs: (3.073859589s)
I1115 09:53:37.196380  545747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3618341285
I1115 09:53:37.204387  545747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3618341285.tar
I1115 09:53:37.211824  545747 build_images.go:218] Built localhost/my-image:functional-755106 from /tmp/build.3618341285.tar
I1115 09:53:37.211856  545747 build_images.go:134] succeeded building to: functional-755106
I1115 09:53:37.211861  545747 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
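The three build steps in the stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a build context like the one below. A minimal sketch of an equivalent context; the content.txt payload is a placeholder, not the file shipped in testdata/build:

    # recreate a context equivalent to testdata/build (content.txt payload is made up)
    mkdir -p /tmp/build && cd /tmp/build
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt
    # build inside the node, which on crio uses podman as shown in the stderr above
    minikube -p functional-755106 image build -t localhost/my-image:functional-755106 .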

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-755106
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
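`update-context` rewrites the profile's kubeconfig entry (API server address and port). A minimal sketch for confirming the result by hand, assuming the functional-755106 context name:

    # refresh the kubeconfig entry, then show the context it points at
    minikube -p functional-755106 update-context --alsologtostderr -v=2
    kubectl config get-contexts functional-755106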

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image rm kicbase/echo-server:functional-755106 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-755106 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)
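The follow-up `image ls` at functional_test.go:466 is what verifies the removal. A minimal sketch of the same check done by hand:

    # remove the tag from the node's image store, then confirm it no longer lists
    minikube -p functional-755106 image rm kicbase/echo-server:functional-755106
    minikube -p functional-755106 image ls | grep echo-server || echo "echo-server no longer present"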

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-755106
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-755106
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-755106
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (209s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1115 09:55:29.824379  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:56:52.889777  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m28.137069439s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 kubectl -- rollout status deployment/busybox: (5.171685896s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-99xkg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-jt24m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-x82gb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-99xkg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-jt24m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-x82gb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-99xkg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-jt24m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-x82gb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.79s)
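The DNS checks above run the same three lookups in every busybox replica. A minimal sketch of the equivalent loop, selecting pods by name prefix rather than by label (the deployment's labels are not shown in this log):

    # repeat the test's DNS checks across every busybox pod in the deployment
    for pod in $(kubectl --context ha-563025 get pods -o name | grep busybox); do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        kubectl --context ha-563025 exec "$pod" -- nslookup "$name"
      done
    done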

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-99xkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-99xkg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-jt24m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-jt24m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-x82gb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 kubectl -- exec busybox-7b57f96db7-x82gb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.43s)
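The `awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the resolved address from the fifth line of busybox's nslookup output, and the pod then pings that host-side gateway (192.168.49.1 here). A minimal sketch of the same sequence for a single pod:

    # resolve host.minikube.internal from inside a pod and ping the returned address
    POD=$(kubectl --context ha-563025 get pods -o name | grep busybox | head -n1)
    HOST_IP=$(kubectl --context ha-563025 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-563025 exec "$POD" -- ping -c 1 "$HOST_IP"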

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node add --alsologtostderr -v 5
E1115 09:57:50.067902  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.075224  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.086620  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.108014  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.149499  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.231059  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.392496  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:50.714182  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:51.356211  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:52.637483  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:57:55.198908  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:58:00.323643  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:58:10.565456  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 node add --alsologtostderr -v 5: (59.34750985s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5: (1.084476089s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-563025 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.160037451s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 status --output json --alsologtostderr -v 5: (1.073006282s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp testdata/cp-test.txt ha-563025:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3138265423/001/cp-test_ha-563025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025:/home/docker/cp-test.txt ha-563025-m02:/home/docker/cp-test_ha-563025_ha-563025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test_ha-563025_ha-563025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025:/home/docker/cp-test.txt ha-563025-m03:/home/docker/cp-test_ha-563025_ha-563025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test_ha-563025_ha-563025-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025:/home/docker/cp-test.txt ha-563025-m04:/home/docker/cp-test_ha-563025_ha-563025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test_ha-563025_ha-563025-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp testdata/cp-test.txt ha-563025-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3138265423/001/cp-test_ha-563025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m02:/home/docker/cp-test.txt ha-563025:/home/docker/cp-test_ha-563025-m02_ha-563025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test_ha-563025-m02_ha-563025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m02:/home/docker/cp-test.txt ha-563025-m03:/home/docker/cp-test_ha-563025-m02_ha-563025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test_ha-563025-m02_ha-563025-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m02:/home/docker/cp-test.txt ha-563025-m04:/home/docker/cp-test_ha-563025-m02_ha-563025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test_ha-563025-m02_ha-563025-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp testdata/cp-test.txt ha-563025-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test.txt"
E1115 09:58:31.046726  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3138265423/001/cp-test_ha-563025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m03:/home/docker/cp-test.txt ha-563025:/home/docker/cp-test_ha-563025-m03_ha-563025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test_ha-563025-m03_ha-563025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m03:/home/docker/cp-test.txt ha-563025-m02:/home/docker/cp-test_ha-563025-m03_ha-563025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test_ha-563025-m03_ha-563025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m03:/home/docker/cp-test.txt ha-563025-m04:/home/docker/cp-test_ha-563025-m03_ha-563025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test_ha-563025-m03_ha-563025-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp testdata/cp-test.txt ha-563025-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3138265423/001/cp-test_ha-563025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m04:/home/docker/cp-test.txt ha-563025:/home/docker/cp-test_ha-563025-m04_ha-563025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025 "sudo cat /home/docker/cp-test_ha-563025-m04_ha-563025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m04:/home/docker/cp-test.txt ha-563025-m02:/home/docker/cp-test_ha-563025-m04_ha-563025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test_ha-563025-m04_ha-563025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 cp ha-563025-m04:/home/docker/cp-test.txt ha-563025-m03:/home/docker/cp-test_ha-563025-m04_ha-563025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 ssh -n ha-563025-m03 "sudo cat /home/docker/cp-test_ha-563025-m04_ha-563025-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.82s)
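
The subtest above is a matrix of minikube cp and minikube ssh calls between the four nodes. A minimal by-hand version of one leg, using the same flags as the logged commands (here minikube stands in for out/minikube-linux-arm64, and the local destination path is arbitrary):

  minikube -p ha-563025 cp testdata/cp-test.txt ha-563025-m02:/home/docker/cp-test.txt
  minikube -p ha-563025 ssh -n ha-563025-m02 "sudo cat /home/docker/cp-test.txt"
  minikube -p ha-563025 cp ha-563025-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-563025-m02.txt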

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 node stop m02 --alsologtostderr -v 5: (12.099127949s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5: exit status 7 (788.031936ms)

-- stdout --
	ha-563025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-563025-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-563025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-563025-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1115 09:58:52.188319  560563 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:58:52.188508  560563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:52.188540  560563 out.go:374] Setting ErrFile to fd 2...
	I1115 09:58:52.188564  560563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:52.189030  560563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 09:58:52.189309  560563 out.go:368] Setting JSON to false
	I1115 09:58:52.189371  560563 mustload.go:66] Loading cluster: ha-563025
	I1115 09:58:52.190306  560563 notify.go:221] Checking for updates...
	I1115 09:58:52.190641  560563 config.go:182] Loaded profile config "ha-563025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:52.190680  560563 status.go:174] checking status of ha-563025 ...
	I1115 09:58:52.191324  560563 cli_runner.go:164] Run: docker container inspect ha-563025 --format={{.State.Status}}
	I1115 09:58:52.213062  560563 status.go:371] ha-563025 host status = "Running" (err=<nil>)
	I1115 09:58:52.213087  560563 host.go:66] Checking if "ha-563025" exists ...
	I1115 09:58:52.213377  560563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-563025
	I1115 09:58:52.242495  560563 host.go:66] Checking if "ha-563025" exists ...
	I1115 09:58:52.242904  560563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:58:52.242951  560563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-563025
	I1115 09:58:52.264325  560563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/ha-563025/id_rsa Username:docker}
	I1115 09:58:52.367063  560563 ssh_runner.go:195] Run: systemctl --version
	I1115 09:58:52.374975  560563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:58:52.388236  560563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:58:52.449235  560563 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-15 09:58:52.440380621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 09:58:52.449829  560563 kubeconfig.go:125] found "ha-563025" server: "https://192.168.49.254:8443"
	I1115 09:58:52.449865  560563 api_server.go:166] Checking apiserver status ...
	I1115 09:58:52.449909  560563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:58:52.461990  560563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	I1115 09:58:52.470417  560563 api_server.go:182] apiserver freezer: "5:freezer:/docker/eaf1e571a250da42640e1a38b7e50d5cce19d24d3de3ce4972c902451edc2c51/crio/crio-37db65d5b73fea04ce7b535f81e42232762df724161885b0a72f06cbae65a1b3"
	I1115 09:58:52.470485  560563 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eaf1e571a250da42640e1a38b7e50d5cce19d24d3de3ce4972c902451edc2c51/crio/crio-37db65d5b73fea04ce7b535f81e42232762df724161885b0a72f06cbae65a1b3/freezer.state
	I1115 09:58:52.477787  560563 api_server.go:204] freezer state: "THAWED"
	I1115 09:58:52.477819  560563 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 09:58:52.486237  560563 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 09:58:52.486262  560563 status.go:463] ha-563025 apiserver status = Running (err=<nil>)
	I1115 09:58:52.486273  560563 status.go:176] ha-563025 status: &{Name:ha-563025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:58:52.486290  560563 status.go:174] checking status of ha-563025-m02 ...
	I1115 09:58:52.486589  560563 cli_runner.go:164] Run: docker container inspect ha-563025-m02 --format={{.State.Status}}
	I1115 09:58:52.506421  560563 status.go:371] ha-563025-m02 host status = "Stopped" (err=<nil>)
	I1115 09:58:52.506446  560563 status.go:384] host is not running, skipping remaining checks
	I1115 09:58:52.506453  560563 status.go:176] ha-563025-m02 status: &{Name:ha-563025-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:58:52.506472  560563 status.go:174] checking status of ha-563025-m03 ...
	I1115 09:58:52.506796  560563 cli_runner.go:164] Run: docker container inspect ha-563025-m03 --format={{.State.Status}}
	I1115 09:58:52.523350  560563 status.go:371] ha-563025-m03 host status = "Running" (err=<nil>)
	I1115 09:58:52.523377  560563 host.go:66] Checking if "ha-563025-m03" exists ...
	I1115 09:58:52.523682  560563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-563025-m03
	I1115 09:58:52.540966  560563 host.go:66] Checking if "ha-563025-m03" exists ...
	I1115 09:58:52.541291  560563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:58:52.541338  560563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-563025-m03
	I1115 09:58:52.564931  560563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/ha-563025-m03/id_rsa Username:docker}
	I1115 09:58:52.670888  560563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:58:52.685438  560563 kubeconfig.go:125] found "ha-563025" server: "https://192.168.49.254:8443"
	I1115 09:58:52.685469  560563 api_server.go:166] Checking apiserver status ...
	I1115 09:58:52.685510  560563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:58:52.696676  560563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	I1115 09:58:52.706058  560563 api_server.go:182] apiserver freezer: "5:freezer:/docker/25393aafab75846cd6120ec9c45e13fcd8073bb974a7183bf2302cc55c44d757/crio/crio-015c5bcd2c793270032cf3de9f9f1b316b92bc47cdebb78637364e4aa6d2e0e9"
	I1115 09:58:52.706174  560563 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/25393aafab75846cd6120ec9c45e13fcd8073bb974a7183bf2302cc55c44d757/crio/crio-015c5bcd2c793270032cf3de9f9f1b316b92bc47cdebb78637364e4aa6d2e0e9/freezer.state
	I1115 09:58:52.715341  560563 api_server.go:204] freezer state: "THAWED"
	I1115 09:58:52.715372  560563 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 09:58:52.724050  560563 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 09:58:52.724077  560563 status.go:463] ha-563025-m03 apiserver status = Running (err=<nil>)
	I1115 09:58:52.724086  560563 status.go:176] ha-563025-m03 status: &{Name:ha-563025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:58:52.724103  560563 status.go:174] checking status of ha-563025-m04 ...
	I1115 09:58:52.724403  560563 cli_runner.go:164] Run: docker container inspect ha-563025-m04 --format={{.State.Status}}
	I1115 09:58:52.745358  560563 status.go:371] ha-563025-m04 host status = "Running" (err=<nil>)
	I1115 09:58:52.745382  560563 host.go:66] Checking if "ha-563025-m04" exists ...
	I1115 09:58:52.745962  560563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-563025-m04
	I1115 09:58:52.772854  560563 host.go:66] Checking if "ha-563025-m04" exists ...
	I1115 09:58:52.773166  560563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:58:52.773215  560563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-563025-m04
	I1115 09:58:52.796168  560563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/ha-563025-m04/id_rsa Username:docker}
	I1115 09:58:52.902862  560563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:58:52.915738  560563 status.go:176] ha-563025-m04 status: &{Name:ha-563025-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
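
The non-zero exit above is expected: with one control-plane host stopped, minikube status reports the degraded state through its exit code (7 in this run) rather than failing outright. A by-hand sketch of the same check, assuming the ha-563025 profile from this run:

  minikube -p ha-563025 node stop m02
  minikube -p ha-563025 status --alsologtostderr -v 5
  echo $?    # 7 here reflects the stopped m02 host, not a broken command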

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (31.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node start m02 --alsologtostderr -v 5
E1115 09:59:12.008004  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 node start m02 --alsologtostderr -v 5: (29.818135382s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5: (1.267300251s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.236256502s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 stop --alsologtostderr -v 5: (27.334731501s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 start --wait true --alsologtostderr -v 5
E1115 10:00:29.825723  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:00:33.929702  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 start --wait true --alsologtostderr -v 5: (1m28.476706183s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.99s)
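
This subtest stops the whole cluster, starts it again with --wait true, and compares the node list before and after. The equivalent manual sequence, taken from the logged commands:

  minikube -p ha-563025 node list
  minikube -p ha-563025 stop
  minikube -p ha-563025 start --wait true
  minikube -p ha-563025 node list    # should match the pre-stop list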

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 node delete m03 --alsologtostderr -v 5: (10.938942733s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 stop --alsologtostderr -v 5: (36.094645308s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5: exit status 7 (116.14997ms)

-- stdout --
	ha-563025
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-563025-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-563025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1115 10:02:11.029372  572229 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:11.029489  572229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:11.029500  572229 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:11.029504  572229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:11.029876  572229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:02:11.030972  572229 out.go:368] Setting JSON to false
	I1115 10:02:11.031010  572229 mustload.go:66] Loading cluster: ha-563025
	I1115 10:02:11.031066  572229 notify.go:221] Checking for updates...
	I1115 10:02:11.031429  572229 config.go:182] Loaded profile config "ha-563025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:11.031449  572229 status.go:174] checking status of ha-563025 ...
	I1115 10:02:11.032284  572229 cli_runner.go:164] Run: docker container inspect ha-563025 --format={{.State.Status}}
	I1115 10:02:11.050890  572229 status.go:371] ha-563025 host status = "Stopped" (err=<nil>)
	I1115 10:02:11.050914  572229 status.go:384] host is not running, skipping remaining checks
	I1115 10:02:11.050921  572229 status.go:176] ha-563025 status: &{Name:ha-563025 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:02:11.050952  572229 status.go:174] checking status of ha-563025-m02 ...
	I1115 10:02:11.051263  572229 cli_runner.go:164] Run: docker container inspect ha-563025-m02 --format={{.State.Status}}
	I1115 10:02:11.073927  572229 status.go:371] ha-563025-m02 host status = "Stopped" (err=<nil>)
	I1115 10:02:11.073954  572229 status.go:384] host is not running, skipping remaining checks
	I1115 10:02:11.073972  572229 status.go:176] ha-563025-m02 status: &{Name:ha-563025-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:02:11.073992  572229 status.go:174] checking status of ha-563025-m04 ...
	I1115 10:02:11.074278  572229 cli_runner.go:164] Run: docker container inspect ha-563025-m04 --format={{.State.Status}}
	I1115 10:02:11.092133  572229 status.go:371] ha-563025-m04 host status = "Stopped" (err=<nil>)
	I1115 10:02:11.092152  572229 status.go:384] host is not running, skipping remaining checks
	I1115 10:02:11.092158  572229 status.go:176] ha-563025-m04 status: &{Name:ha-563025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (73.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1115 10:02:50.067950  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:03:17.771265  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m12.595024405s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (73.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (83.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 node add --control-plane --alsologtostderr -v 5: (1m22.215432293s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-563025 status --alsologtostderr -v 5: (1.07541852s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.29s)
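
Adding a further control-plane node to the running HA cluster and re-checking its state, as exercised above, is a two-command operation (sketch only; profile name as in this run):

  minikube -p ha-563025 node add --control-plane
  minikube -p ha-563025 status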

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.023324249s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                    
x
+
TestJSONOutput/start/Command (79.86s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-187814 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1115 10:05:29.825173  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-187814 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.859439908s)
--- PASS: TestJSONOutput/start/Command (79.86s)
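
With --output=json, minikube emits one CloudEvents-style JSON object per line instead of human-readable text (see the TestErrorJSONOutput output further down for what the events look like). The logged start invocation, reformatted for readability:

  minikube start -p json-output-187814 --output=json --user=testUser \
    --memory=3072 --wait=true --driver=docker --container-runtime=crio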

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-187814 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-187814 --output=json --user=testUser: (5.864145569s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-829235 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-829235 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.732154ms)

-- stdout --
	{"specversion":"1.0","id":"dfb291b6-36c1-4de9-849a-c48a3d44d786","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-829235] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"419e28e8-251b-4c64-ad71-79c546defeab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21895"}}
	{"specversion":"1.0","id":"d105d0f5-9062-413c-bf1a-baf3b1a534b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b74a8c37-f94f-4cdb-9974-df7f271d9c0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig"}}
	{"specversion":"1.0","id":"fc0d6b11-27ad-4aa0-80de-12e7afbc4fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube"}}
	{"specversion":"1.0","id":"15ee431f-babd-4ee2-ba3b-cf9a01caf436","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5a9347a5-827d-4ea7-9c97-5461b082a2c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99cdacbe-daca-4439-9c9a-8bb31e6df6fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-829235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-829235
--- PASS: TestErrorJSONOutput (0.24s)
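
Failures surface in the same stream as an io.k8s.sigs.minikube.error event carrying the exit code (56 / DRV_UNSUPPORTED_OS above). One way to pull just the error events out of the stream; the grep filter is illustrative and not part of the test:

  minikube start -p json-output-error-829235 --memory=3072 --output=json --wait=true --driver=fail \
    | grep '"type":"io.k8s.sigs.minikube.error"'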

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-355524 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-355524 --network=: (35.883136253s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-355524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-355524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-355524: (2.176064562s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.08s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (33.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-288562 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-288562 --network=bridge: (31.438888386s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-288562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-288562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-288562: (2.086191844s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.55s)

                                                
                                    
x
+
TestKicExistingNetwork (32.81s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1115 10:07:43.701269  516637 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1115 10:07:43.716613  516637 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1115 10:07:43.717466  516637 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1115 10:07:43.717501  516637 cli_runner.go:164] Run: docker network inspect existing-network
W1115 10:07:43.733930  516637 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1115 10:07:43.733961  516637 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1115 10:07:43.733979  516637 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1115 10:07:43.734085  516637 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1115 10:07:43.752155  516637 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-03fcaf6cb6bf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:21:e0:cf:fc:c1} reservation:<nil>}
I1115 10:07:43.752516  516637 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016c8f50}
I1115 10:07:43.752544  516637 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1115 10:07:43.752600  516637 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1115 10:07:43.807097  516637 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-130322 --network=existing-network
E1115 10:07:50.068210  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-130322 --network=existing-network: (30.637371037s)
helpers_test.go:175: Cleaning up "existing-network-130322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-130322
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-130322: (2.029987804s)
I1115 10:08:16.491386  516637 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.81s)
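
The test pre-creates a Docker network and then points minikube at it with --network. The equivalent manual steps, condensed from the commands logged above (label and masquerade options trimmed):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o com.docker.network.driver.mtu=1500 existing-network
  minikube start -p existing-network-130322 --network=existing-network
  docker network ls --format '{{.Name}}'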

                                                
                                    
x
+
TestKicCustomSubnet (35.97s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-075793 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-075793 --subnet=192.168.60.0/24: (33.682785812s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-075793 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-075793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-075793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-075793: (2.254348449s)
--- PASS: TestKicCustomSubnet (35.97s)

                                                
                                    
x
+
TestKicStaticIP (35.09s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-906393 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-906393 --static-ip=192.168.200.200: (32.767619176s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-906393 ip
helpers_test.go:175: Cleaning up "static-ip-906393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-906393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-906393: (2.167750861s)
--- PASS: TestKicStaticIP (35.09s)
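
Pinning the node to a fixed address and reading it back, as the test does:

  minikube start -p static-ip-906393 --static-ip=192.168.200.200
  minikube -p static-ip-906393 ip    # expected to print 192.168.200.200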

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (75.08s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-109593 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-109593 --driver=docker  --container-runtime=crio: (32.810142077s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-112153 --driver=docker  --container-runtime=crio
E1115 10:10:29.831767  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-112153 --driver=docker  --container-runtime=crio: (36.579867909s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-109593
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-112153
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-112153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-112153
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-112153: (2.118842717s)
helpers_test.go:175: Cleaning up "first-109593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-109593
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-109593: (2.116068893s)
--- PASS: TestMinikubeProfile (75.08s)
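
The profile subcommands used above switch the active profile and list all profiles as JSON; the same sequence by hand:

  minikube profile first-109593
  minikube profile list -ojson
  minikube profile second-112153
  minikube profile list -ojson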

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-024767 --memory=3072 --mount-string /tmp/TestMountStartserial3186482830/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-024767 --memory=3072 --mount-string /tmp/TestMountStartserial3186482830/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.824921284s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.83s)
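
mount-start brings up a Kubernetes-free node with a host directory mounted into it; the later subtests verify the mount with ssh -- ls. A condensed form of the logged command (the host path is the test's temp directory and can be any local directory):

  minikube start -p mount-start-1-024767 --memory=3072 --no-kubernetes \
    --mount-string /tmp/TestMountStartserial3186482830/001:/minikube-host \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --driver=docker --container-runtime=crio
  minikube -p mount-start-1-024767 ssh -- ls /minikube-host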

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-024767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (10.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-026671 --memory=3072 --mount-string /tmp/TestMountStartserial3186482830/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-026671 --memory=3072 --mount-string /tmp/TestMountStartserial3186482830/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.217939685s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-026671 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-024767 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-024767 --alsologtostderr -v=5: (1.72957344s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-026671 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-026671
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-026671: (1.295977027s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-026671
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-026671: (8.069204641s)
--- PASS: TestMountStart/serial/RestartStopped (9.07s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-026671 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
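
The serial TestMountStart steps above cover the full host-mount lifecycle: start with a mount, verify it over ssh, stop, restart, and verify again. A minimal shell sketch of the same flow using the flags from the runs above; the bare `minikube` binary name, the mount-demo profile, and the /tmp/host-dir path are illustrative stand-ins for the test's own binary, profiles, and temp directories:

    # create a host directory and start a no-Kubernetes profile with it mounted
    mkdir -p /tmp/host-dir
    minikube start -p mount-demo --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # the mount should be visible inside the node
    minikube -p mount-demo ssh -- ls /minikube-host
    # stop and restart; the mount is expected to survive the restart
    minikube stop -p mount-demo
    minikube start -p mount-demo
    minikube -p mount-demo ssh -- ls /minikube-host
    minikube delete -p mount-demo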

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.75s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-774332 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1115 10:12:50.068000  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-774332 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m47.234127006s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-774332 -- rollout status deployment/busybox: (3.176428515s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-ppt78 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-xm6kk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-ppt78 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-xm6kk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-ppt78 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-xm6kk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)
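
DeployApp2Nodes boils down to rolling out the busybox deployment and then resolving an external name, the in-cluster short name, and the fully qualified service name from inside each pod. A condensed sketch of those checks, assuming the multinode-774332 profile above is running and picking only the first pod (the test loops over both):

    minikube kubectl -p multinode-774332 -- rollout status deployment/busybox
    POD=$(minikube kubectl -p multinode-774332 -- get pods -o jsonpath='{.items[0].metadata.name}')
    # external, cluster-internal, and fully qualified lookups from inside the pod
    minikube kubectl -p multinode-774332 -- exec "$POD" -- nslookup kubernetes.io
    minikube kubectl -p multinode-774332 -- exec "$POD" -- nslookup kubernetes.default
    minikube kubectl -p multinode-774332 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local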

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-ppt78 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-ppt78 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-xm6kk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-774332 -- exec busybox-7b57f96db7-xm6kk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
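
PingHostFrom2Pods checks that pods can reach the host: it resolves host.minikube.internal inside the pod and pings the returned gateway address (192.168.67.1 in this run). A one-pod sketch reusing $POD from the previous snippet:

    GATEWAY=$(minikube kubectl -p multinode-774332 -- exec "$POD" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p multinode-774332 -- exec "$POD" -- sh -c "ping -c 1 $GATEWAY"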

                                                
                                    
TestMultiNode/serial/AddNode (58.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-774332 -v=5 --alsologtostderr
E1115 10:13:32.892210  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-774332 -v=5 --alsologtostderr: (57.76106465s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.47s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-774332 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.51s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp testdata/cp-test.txt multinode-774332:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3451335709/001/cp-test_multinode-774332.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332:/home/docker/cp-test.txt multinode-774332-m02:/home/docker/cp-test_multinode-774332_multinode-774332-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test_multinode-774332_multinode-774332-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332:/home/docker/cp-test.txt multinode-774332-m03:/home/docker/cp-test_multinode-774332_multinode-774332-m03.txt
E1115 10:14:13.132606  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m03 "sudo cat /home/docker/cp-test_multinode-774332_multinode-774332-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp testdata/cp-test.txt multinode-774332-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3451335709/001/cp-test_multinode-774332-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332-m02:/home/docker/cp-test.txt multinode-774332:/home/docker/cp-test_multinode-774332-m02_multinode-774332.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test_multinode-774332-m02_multinode-774332.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332-m02:/home/docker/cp-test.txt multinode-774332-m03:/home/docker/cp-test_multinode-774332-m02_multinode-774332-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m03 "sudo cat /home/docker/cp-test_multinode-774332-m02_multinode-774332-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp testdata/cp-test.txt multinode-774332-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3451335709/001/cp-test_multinode-774332-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332-m03:/home/docker/cp-test.txt multinode-774332:/home/docker/cp-test_multinode-774332-m03_multinode-774332.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test_multinode-774332-m03_multinode-774332.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 cp multinode-774332-m03:/home/docker/cp-test.txt multinode-774332-m02:/home/docker/cp-test_multinode-774332-m03_multinode-774332-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test_multinode-774332-m03_multinode-774332-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.51s)
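
The CopyFile matrix above is `minikube cp` in three directions (host to node, node to host, node to node), each verified with `ssh -n <node> sudo cat`. A trimmed sketch with one leg per direction; local destination paths are placeholders:

    # host -> node, then read it back on the node
    minikube -p multinode-774332 cp testdata/cp-test.txt multinode-774332:/home/docker/cp-test.txt
    minikube -p multinode-774332 ssh -n multinode-774332 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p multinode-774332 cp multinode-774332:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    # node -> node, verified on the destination node
    minikube -p multinode-774332 cp multinode-774332:/home/docker/cp-test.txt \
      multinode-774332-m02:/home/docker/cp-test-copy.txt
    minikube -p multinode-774332 ssh -n multinode-774332-m02 "sudo cat /home/docker/cp-test-copy.txt"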

                                                
                                    
TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-774332 node stop m03: (1.325515942s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-774332 status: exit status 7 (540.147298ms)

                                                
                                                
-- stdout --
	multinode-774332
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-774332-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-774332-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr: exit status 7 (548.313992ms)

                                                
                                                
-- stdout --
	multinode-774332
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-774332-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-774332-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:14:22.246255  622540 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:14:22.246436  622540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:14:22.246491  622540 out.go:374] Setting ErrFile to fd 2...
	I1115 10:14:22.246511  622540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:14:22.246970  622540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:14:22.247306  622540 out.go:368] Setting JSON to false
	I1115 10:14:22.247369  622540 mustload.go:66] Loading cluster: multinode-774332
	I1115 10:14:22.248187  622540 config.go:182] Loaded profile config "multinode-774332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:14:22.248231  622540 status.go:174] checking status of multinode-774332 ...
	I1115 10:14:22.248421  622540 notify.go:221] Checking for updates...
	I1115 10:14:22.250126  622540 cli_runner.go:164] Run: docker container inspect multinode-774332 --format={{.State.Status}}
	I1115 10:14:22.270489  622540 status.go:371] multinode-774332 host status = "Running" (err=<nil>)
	I1115 10:14:22.270517  622540 host.go:66] Checking if "multinode-774332" exists ...
	I1115 10:14:22.270799  622540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-774332
	I1115 10:14:22.290825  622540 host.go:66] Checking if "multinode-774332" exists ...
	I1115 10:14:22.291110  622540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:14:22.291160  622540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774332
	I1115 10:14:22.313463  622540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33634 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/multinode-774332/id_rsa Username:docker}
	I1115 10:14:22.427468  622540 ssh_runner.go:195] Run: systemctl --version
	I1115 10:14:22.434381  622540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:14:22.447326  622540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:14:22.510453  622540 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-15 10:14:22.500013869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:14:22.511047  622540 kubeconfig.go:125] found "multinode-774332" server: "https://192.168.67.2:8443"
	I1115 10:14:22.511094  622540 api_server.go:166] Checking apiserver status ...
	I1115 10:14:22.511143  622540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:14:22.522797  622540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	I1115 10:14:22.530514  622540 api_server.go:182] apiserver freezer: "5:freezer:/docker/88188aea1d81fb53fa88d98f9d3a532764297d0f23a84fa0250c3eb423fd47b4/crio/crio-841187493ca6bfb4153f83a0e32b5f64c4a126bd4509e559806c83dd497e792c"
	I1115 10:14:22.530585  622540 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/88188aea1d81fb53fa88d98f9d3a532764297d0f23a84fa0250c3eb423fd47b4/crio/crio-841187493ca6bfb4153f83a0e32b5f64c4a126bd4509e559806c83dd497e792c/freezer.state
	I1115 10:14:22.537669  622540 api_server.go:204] freezer state: "THAWED"
	I1115 10:14:22.537697  622540 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1115 10:14:22.547348  622540 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1115 10:14:22.547381  622540 status.go:463] multinode-774332 apiserver status = Running (err=<nil>)
	I1115 10:14:22.547392  622540 status.go:176] multinode-774332 status: &{Name:multinode-774332 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:14:22.547411  622540 status.go:174] checking status of multinode-774332-m02 ...
	I1115 10:14:22.547728  622540 cli_runner.go:164] Run: docker container inspect multinode-774332-m02 --format={{.State.Status}}
	I1115 10:14:22.564464  622540 status.go:371] multinode-774332-m02 host status = "Running" (err=<nil>)
	I1115 10:14:22.564491  622540 host.go:66] Checking if "multinode-774332-m02" exists ...
	I1115 10:14:22.564775  622540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-774332-m02
	I1115 10:14:22.580789  622540 host.go:66] Checking if "multinode-774332-m02" exists ...
	I1115 10:14:22.581195  622540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:14:22.581241  622540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-774332-m02
	I1115 10:14:22.598627  622540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33639 SSHKeyPath:/home/jenkins/minikube-integration/21895-514793/.minikube/machines/multinode-774332-m02/id_rsa Username:docker}
	I1115 10:14:22.703436  622540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:14:22.716425  622540 status.go:176] multinode-774332-m02 status: &{Name:multinode-774332-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:14:22.716460  622540 status.go:174] checking status of multinode-774332-m03 ...
	I1115 10:14:22.716762  622540 cli_runner.go:164] Run: docker container inspect multinode-774332-m03 --format={{.State.Status}}
	I1115 10:14:22.741179  622540 status.go:371] multinode-774332-m03 host status = "Stopped" (err=<nil>)
	I1115 10:14:22.741206  622540 status.go:384] host is not running, skipping remaining checks
	I1115 10:14:22.741213  622540 status.go:176] multinode-774332-m03 status: &{Name:multinode-774332-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
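
Note that `minikube status` exits with status 7 once any node is stopped, even though the output itself is well formed, so scripted health checks have to read the exit code rather than treat non-zero as a hard failure. A small sketch against the profile above:

    minikube -p multinode-774332 node stop m03
    minikube -p multinode-774332 status --alsologtostderr
    rc=$?
    # 0 = everything running; 7 = at least one host/kubelet/apiserver is stopped, as in the run above
    echo "status exit code: $rc"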

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-774332 node start m03 -v=5 --alsologtostderr: (7.124326236s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.94s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.22s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-774332
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-774332
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-774332: (25.130240269s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-774332 --wait=true -v=5 --alsologtostderr
E1115 10:15:29.825169  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-774332 --wait=true -v=5 --alsologtostderr: (54.957351216s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-774332
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.22s)
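
RestartKeepsNodes is a plain stop of the whole profile followed by `start --wait=true`; the node list before and after the cycle should match. A sketch using the commands from the run above:

    minikube node list -p multinode-774332
    minikube stop -p multinode-774332
    # --wait=true blocks until all nodes and core components are back
    minikube start -p multinode-774332 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-774332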

                                                
                                    
TestMultiNode/serial/DeleteNode (5.63s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-774332 node delete m03: (4.920978415s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-774332 stop: (23.851585879s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-774332 status: exit status 7 (86.497557ms)

                                                
                                                
-- stdout --
	multinode-774332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-774332-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr: exit status 7 (96.935415ms)

                                                
                                                
-- stdout --
	multinode-774332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-774332-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:16:20.532082  630327 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:16:20.532453  630327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:16:20.532488  630327 out.go:374] Setting ErrFile to fd 2...
	I1115 10:16:20.532508  630327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:16:20.532789  630327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:16:20.533002  630327 out.go:368] Setting JSON to false
	I1115 10:16:20.533056  630327 mustload.go:66] Loading cluster: multinode-774332
	I1115 10:16:20.533460  630327 config.go:182] Loaded profile config "multinode-774332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:16:20.533500  630327 status.go:174] checking status of multinode-774332 ...
	I1115 10:16:20.534128  630327 cli_runner.go:164] Run: docker container inspect multinode-774332 --format={{.State.Status}}
	I1115 10:16:20.534423  630327 notify.go:221] Checking for updates...
	I1115 10:16:20.552543  630327 status.go:371] multinode-774332 host status = "Stopped" (err=<nil>)
	I1115 10:16:20.552568  630327 status.go:384] host is not running, skipping remaining checks
	I1115 10:16:20.552576  630327 status.go:176] multinode-774332 status: &{Name:multinode-774332 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 10:16:20.552604  630327 status.go:174] checking status of multinode-774332-m02 ...
	I1115 10:16:20.552953  630327 cli_runner.go:164] Run: docker container inspect multinode-774332-m02 --format={{.State.Status}}
	I1115 10:16:20.575242  630327 status.go:371] multinode-774332-m02 host status = "Stopped" (err=<nil>)
	I1115 10:16:20.575266  630327 status.go:384] host is not running, skipping remaining checks
	I1115 10:16:20.575274  630327 status.go:176] multinode-774332-m02 status: &{Name:multinode-774332-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-774332 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-774332 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.961132719s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-774332 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-774332
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-774332-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-774332-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.214253ms)

                                                
                                                
-- stdout --
	* [multinode-774332-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-774332-m02' is duplicated with machine name 'multinode-774332-m02' in profile 'multinode-774332'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-774332-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-774332-m03 --driver=docker  --container-runtime=crio: (33.419662079s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-774332
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-774332: exit status 80 (345.798101ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-774332 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-774332-m03 already exists in multinode-774332-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-774332-m03
E1115 10:17:50.067977  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-774332-m03: (2.094373421s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.01s)
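
The name-conflict checks above map to two distinct exit codes: starting a new profile whose name collides with an existing machine name fails fast with exit status 14 (MK_USAGE), while `node add` into a profile whose next machine name is already claimed elsewhere fails with exit status 80 (GUEST_NODE_ADD). A sketch of the first case, assuming the multinode-774332 profile still exists:

    # multinode-774332-m02 is already a machine of the multinode-774332 profile
    minikube start -p multinode-774332-m02 --driver=docker --container-runtime=crio
    echo "exit code: $?"   # expected 14: "Profile name should be unique"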

                                                
                                    
TestPreload (130.25s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-968851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-968851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m5.909163028s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-968851 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-968851 image pull gcr.io/k8s-minikube/busybox: (2.591723108s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-968851
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-968851: (5.922120918s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-968851 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-968851 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.062005172s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-968851 image list
helpers_test.go:175: Cleaning up "test-preload-968851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-968851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-968851: (2.50555152s)
--- PASS: TestPreload (130.25s)
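
The preload test amounts to: build a cluster with --preload=false on an older Kubernetes version, pull an extra image, stop, restart on the default (preloaded) path, and confirm the pulled image is still present. A sketch with a hypothetical preload-demo profile, reusing the flags from the runs above:

    minikube start -p preload-demo --memory=3072 --wait=true --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=3072 --wait=true -v=1 \
      --driver=docker --container-runtime=crio
    # the busybox image pulled before the stop should still be listed
    minikube -p preload-demo image list
    minikube delete -p preload-demo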

                                                
                                    
TestScheduledStopUnix (110.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-857291 --memory=3072 --driver=docker  --container-runtime=crio
E1115 10:20:29.829773  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-857291 --memory=3072 --driver=docker  --container-runtime=crio: (34.198576943s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-857291 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:20:39.099047  644306 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:20:39.099234  644306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:20:39.099247  644306 out.go:374] Setting ErrFile to fd 2...
	I1115 10:20:39.099253  644306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:20:39.099528  644306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:20:39.099790  644306 out.go:368] Setting JSON to false
	I1115 10:20:39.099904  644306 mustload.go:66] Loading cluster: scheduled-stop-857291
	I1115 10:20:39.100239  644306 config.go:182] Loaded profile config "scheduled-stop-857291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:20:39.100317  644306 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/config.json ...
	I1115 10:20:39.100491  644306 mustload.go:66] Loading cluster: scheduled-stop-857291
	I1115 10:20:39.100607  644306 config.go:182] Loaded profile config "scheduled-stop-857291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-857291 -n scheduled-stop-857291
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-857291 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:20:39.561082  644395 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:20:39.561237  644395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:20:39.561262  644395 out.go:374] Setting ErrFile to fd 2...
	I1115 10:20:39.561269  644395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:20:39.561540  644395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:20:39.561851  644395 out.go:368] Setting JSON to false
	I1115 10:20:39.562914  644395 daemonize_unix.go:73] killing process 644322 as it is an old scheduled stop
	I1115 10:20:39.563031  644395 mustload.go:66] Loading cluster: scheduled-stop-857291
	I1115 10:20:39.563446  644395 config.go:182] Loaded profile config "scheduled-stop-857291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:20:39.563525  644395 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/config.json ...
	I1115 10:20:39.563688  644395 mustload.go:66] Loading cluster: scheduled-stop-857291
	I1115 10:20:39.563818  644395 config.go:182] Loaded profile config "scheduled-stop-857291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1115 10:20:39.571430  516637 retry.go:31] will retry after 63.989µs: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.571586  516637 retry.go:31] will retry after 136.362µs: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.576518  516637 retry.go:31] will retry after 237.861µs: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.577668  516637 retry.go:31] will retry after 479.246µs: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.578789  516637 retry.go:31] will retry after 269.678µs: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.579911  516637 retry.go:31] will retry after 551.355µs: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.581063  516637 retry.go:31] will retry after 1.678624ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.583663  516637 retry.go:31] will retry after 2.003389ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.585782  516637 retry.go:31] will retry after 3.004622ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.588919  516637 retry.go:31] will retry after 4.373722ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.594145  516637 retry.go:31] will retry after 3.549149ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.598397  516637 retry.go:31] will retry after 9.200306ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.608734  516637 retry.go:31] will retry after 15.733433ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.624966  516637 retry.go:31] will retry after 26.971604ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
I1115 10:20:39.652198  516637 retry.go:31] will retry after 25.867136ms: open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-857291 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-857291 -n scheduled-stop-857291
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-857291
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-857291 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 10:21:05.477563  644761 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:21:05.477732  644761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:21:05.477741  644761 out.go:374] Setting ErrFile to fd 2...
	I1115 10:21:05.477746  644761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:21:05.478049  644761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:21:05.478391  644761 out.go:368] Setting JSON to false
	I1115 10:21:05.478488  644761 mustload.go:66] Loading cluster: scheduled-stop-857291
	I1115 10:21:05.478824  644761 config.go:182] Loaded profile config "scheduled-stop-857291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:21:05.478898  644761 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/scheduled-stop-857291/config.json ...
	I1115 10:21:05.479163  644761 mustload.go:66] Loading cluster: scheduled-stop-857291
	I1115 10:21:05.479288  644761 config.go:182] Loaded profile config "scheduled-stop-857291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-857291
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-857291: exit status 7 (70.919826ms)

                                                
                                                
-- stdout --
	scheduled-stop-857291
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-857291 -n scheduled-stop-857291
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-857291 -n scheduled-stop-857291: exit status 7 (65.138353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-857291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-857291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-857291: (4.341114346s)
--- PASS: TestScheduledStopUnix (110.13s)
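
The scheduled-stop flags exercised above can be driven by hand the same way: a later --schedule replaces any pending one, --cancel-scheduled clears it, and once a schedule fires the profile reports Stopped and `status` exits 7. A sketch against a hypothetical running profile named sched-demo:

    minikube stop -p sched-demo --schedule 5m
    # scheduling again replaces the earlier pending stop
    minikube stop -p sched-demo --schedule 15s
    minikube stop -p sched-demo --cancel-scheduled   # "All existing scheduled stops cancelled"
    minikube status -p sched-demo --format='{{.Host}}'
    # let a short schedule fire; the same status call then prints Stopped and exits 7
    minikube stop -p sched-demo --schedule 15s
    sleep 30
    minikube status -p sched-demo --format='{{.Host}}'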

                                                
                                    
TestInsufficientStorage (13.94s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-087164 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-087164 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.345563657s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d8fc4442-8b81-4bb7-88a7-3440cead9b3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-087164] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cc77bcd-fcaa-40c7-bf38-5729b774e27b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21895"}}
	{"specversion":"1.0","id":"14651e1f-c764-4608-b090-fea6e3195b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7156f8b-dfd5-4481-b32d-85589086a8a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig"}}
	{"specversion":"1.0","id":"41f033cd-66b1-46ac-9eb7-4e55a01a9742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube"}}
	{"specversion":"1.0","id":"aaeb6a4b-c8ee-471d-b938-3aa070293eb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"93636727-8388-4f39-92b7-e39c79ec21ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"273c60a9-910a-4240-a0cc-7a2bf028a3e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"15bf2bac-431c-4e49-bc7d-775917437995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ab3ad0a2-39de-4b46-92e2-83638df4c033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fe0a591-b9df-4e57-91a5-f599768e3dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"58c86580-f93d-4a15-9211-d85f09af3b8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-087164\" primary control-plane node in \"insufficient-storage-087164\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b90b918-9c30-49fa-8b6f-ddf31eba0975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ec90d2e-a8a7-4e4e-89a1-d724b7ead5ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee9853b4-f6dc-4624-81f7-4877d94c3772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-087164 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-087164 --output=json --layout=cluster: exit status 7 (307.288128ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-087164","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-087164","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 10:22:06.606097  646485 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-087164" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-087164 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-087164 --output=json --layout=cluster: exit status 7 (313.469277ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-087164","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-087164","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 10:22:06.919497  646550 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-087164" does not appear in /home/jenkins/minikube-integration/21895-514793/kubeconfig
	E1115 10:22:06.929311  646550 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/insufficient-storage-087164/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-087164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-087164
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-087164: (1.967309868s)
--- PASS: TestInsufficientStorage (13.94s)
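
The cluster-layout status JSON captured above can be read programmatically. The following is a minimal Go sketch, assuming only the binary path, profile name, and the 507 InsufficientStorage code shown in the log; the clusterStatus struct and its field subset are illustrative and not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus keeps only the fields of the --layout=cluster JSON that are
// visible in the captured output above; everything else is ignored.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
	}
}

func main() {
	// Profile name and flags copied from the run above. Exit status 7 still
	// leaves the JSON document on stdout, so keep reading even on error.
	out, runErr := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "insufficient-storage-087164", "--output=json", "--layout=cluster").Output()
	if len(out) == 0 {
		fmt.Println("no status output:", runErr)
		return
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	// 507 is the InsufficientStorage code seen in the output above.
	fmt.Printf("%s: %d (%s), %d node(s)\n", st.Name, st.StatusCode, st.StatusName, len(st.Nodes))
}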

                                                
                                    
x
+
TestRunningBinaryUpgrade (55.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.205481777 start -p running-upgrade-528342 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1115 10:25:29.824661  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.205481777 start -p running-upgrade-528342 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.680048798s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-528342 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-528342 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.018509664s)
helpers_test.go:175: Cleaning up "running-upgrade-528342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-528342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-528342: (1.957820096s)
--- PASS: TestRunningBinaryUpgrade (55.32s)

                                                
                                    
x
+
TestKubernetesUpgrade (368.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.466467481s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-480353
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-480353: (3.741664158s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-480353 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-480353 status --format={{.Host}}: exit status 7 (125.861375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.161287564s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-480353 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (135.487303ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-480353] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-480353
	    minikube start -p kubernetes-upgrade-480353 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4803532 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-480353 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-480353 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.560502954s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-480353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-480353
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-480353: (2.443318586s)
--- PASS: TestKubernetesUpgrade (368.79s)
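
The version check at version_upgrade_test.go:248 runs "kubectl --context kubernetes-upgrade-480353 version --output=json" after the upgrade. A minimal Go sketch of the same check follows; the context name comes from the log, while the versionInfo struct is an illustrative subset of kubectl's JSON output, not the test's own code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionInfo keeps only the server gitVersion from `kubectl version --output=json`.
type versionInfo struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	// Context name taken from the log above.
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-480353",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("decode:", err)
		return
	}
	// After the upgrade step above this should report v1.34.1.
	fmt.Println("server:", v.ServerVersion.GitVersion)
}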

                                                
                                    
x
+
TestMissingContainerUpgrade (125.47s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1095873811 start -p missing-upgrade-372439 --memory=3072 --driver=docker  --container-runtime=crio
E1115 10:22:50.067937  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1095873811 start -p missing-upgrade-372439 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.897266587s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-372439
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-372439: (1.019119518s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-372439
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-372439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-372439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.224662156s)
helpers_test.go:175: Cleaning up "missing-upgrade-372439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-372439
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-372439: (2.132184219s)
--- PASS: TestMissingContainerUpgrade (125.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759398 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-759398 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (106.071518ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-759398] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
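
The run above exits with status 14 and an MK_USAGE error because --no-kubernetes conflicts with --kubernetes-version. A minimal Go sketch of detecting that rejection from a wrapper follows, assuming only what the captured output shows (exit code 14 and the MK_USAGE marker on stderr); it is not how the test itself is written.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Flags copied from the failing invocation above: --no-kubernetes
	// combined with --kubernetes-version is rejected before any work is done.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "NoKubernetes-759398",
		"--no-kubernetes", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 &&
		strings.Contains(stderr.String(), "MK_USAGE") {
		fmt.Println("usage error, as in the captured run:", strings.TrimSpace(stderr.String()))
		return
	}
	fmt.Println("unexpected result:", err)
}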

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (46.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759398 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759398 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (46.030996897s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-759398 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.919884278s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-759398 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-759398 status -o json: exit status 2 (348.206242ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-759398","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-759398
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-759398: (2.258997006s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.53s)
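
With Kubernetes stopped but the node container still up, the "status -o json" output above reports Host Running while Kubelet and APIServer are Stopped, and the command exits with status 2. A minimal Go sketch that decodes that shape follows; the profileStatus struct mirrors only the fields visible in the captured JSON and is illustrative only.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields visible in the `status -o json` output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Exit status 2 (components stopped) still prints the JSON document,
	// so read stdout regardless of the error.
	out, runErr := exec.Command("out/minikube-linux-arm64",
		"-p", "NoKubernetes-759398", "status", "-o", "json").Output()
	if len(out) == 0 {
		fmt.Println("no status output:", runErr)
		return
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	// In --no-kubernetes mode the log above shows Host Running with the
	// kubelet and apiserver both Stopped.
	fmt.Printf("%s host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}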

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759398 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.928803843s)
--- PASS: TestNoKubernetes/serial/Start (8.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21895-514793/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-759398 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-759398 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.319608ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-759398
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-759398: (1.300590273s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759398 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759398 --driver=docker  --container-runtime=crio: (6.868943495s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-759398 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-759398 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.383ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (58.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2971603138 start -p stopped-upgrade-063492 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2971603138 start -p stopped-upgrade-063492 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.875343337s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2971603138 -p stopped-upgrade-063492 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2971603138 -p stopped-upgrade-063492 stop: (1.227834702s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-063492 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-063492 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.84479388s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.95s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-063492
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-063492: (1.222875306s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
x
+
TestPause/serial/Start (84.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-742370 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-742370 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.317039259s)
--- PASS: TestPause/serial/Start (84.32s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-742370 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1115 10:27:50.068593  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-742370 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.793804499s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-864099 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-864099 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (204.656736ms)

                                                
                                                
-- stdout --
	* [false-864099] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:29:18.270476  685224 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:29:18.270663  685224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:29:18.270689  685224 out.go:374] Setting ErrFile to fd 2...
	I1115 10:29:18.270708  685224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:29:18.271005  685224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-514793/.minikube/bin
	I1115 10:29:18.271446  685224 out.go:368] Setting JSON to false
	I1115 10:29:18.272398  685224 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18710,"bootTime":1763183849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1115 10:29:18.272488  685224 start.go:143] virtualization:  
	I1115 10:29:18.276068  685224 out.go:179] * [false-864099] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1115 10:29:18.279219  685224 notify.go:221] Checking for updates...
	I1115 10:29:18.279692  685224 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:29:18.282882  685224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:29:18.285707  685224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-514793/kubeconfig
	I1115 10:29:18.288567  685224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-514793/.minikube
	I1115 10:29:18.291491  685224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1115 10:29:18.294386  685224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:29:18.297878  685224 config.go:182] Loaded profile config "kubernetes-upgrade-480353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:29:18.298014  685224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:29:18.332982  685224 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1115 10:29:18.333107  685224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:29:18.395666  685224 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-15 10:29:18.386170327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1115 10:29:18.395775  685224 docker.go:319] overlay module found
	I1115 10:29:18.399232  685224 out.go:179] * Using the docker driver based on user configuration
	I1115 10:29:18.402273  685224 start.go:309] selected driver: docker
	I1115 10:29:18.402303  685224 start.go:930] validating driver "docker" against <nil>
	I1115 10:29:18.402318  685224 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:29:18.405799  685224 out.go:203] 
	W1115 10:29:18.408808  685224 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1115 10:29:18.411637  685224 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-864099 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-864099" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:29:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-480353
contexts:
- context:
    cluster: kubernetes-upgrade-480353
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:29:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-480353
  name: kubernetes-upgrade-480353
current-context: kubernetes-upgrade-480353
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-480353
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/kubernetes-upgrade-480353/client.crt
    client-key: /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/kubernetes-upgrade-480353/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-864099

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-864099"

                                                
                                                
----------------------- debugLogs end: false-864099 [took: 3.59369754s] --------------------------------
helpers_test.go:175: Cleaning up "false-864099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-864099
--- PASS: TestNetworkPlugins/group/false (4.01s)
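
Because the crio runtime requires CNI, the start above is rejected with exit status 14 before any cluster is created, which is why every debugLogs probe reports a missing context or profile. A minimal Go sketch that double-checks no false-864099 context was left behind follows; it assumes only a kubectl binary on PATH and is not part of the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The rejected start above ("crio" requires CNI) exits before creating
	// anything, so the kubeconfig should contain no false-864099 context.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		if name == "false-864099" {
			fmt.Println("unexpected: false-864099 context exists")
			return
		}
	}
	fmt.Println("no false-864099 context, matching the debug output above")
}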

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (60.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1115 10:30:53.134792  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.965174184s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-448285 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [db53b178-99fd-42b5-b5fc-37264803a8a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [db53b178-99fd-42b5-b5fc-37264803a8a3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004044234s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-448285 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)
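
The deploy step above waits up to 8m0s for pods matching integration-test=busybox in the default namespace to become healthy. A minimal Go sketch of a comparable wait loop follows; the context name, namespace, and label selector come from the log, while the two-minute deadline and five-second poll interval are arbitrary illustrative values rather than the test's own timings.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the pod phase with a jsonpath query until it reports Running or
	// the (illustrative) deadline expires.
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(5 * time.Second) {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-448285",
			"get", "pods", "-n", "default", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			fmt.Println("kubectl:", err)
			continue
		}
		if strings.Contains(string(out), "Running") {
			fmt.Println("busybox pod is Running")
			return
		}
	}
	fmt.Println("timed out waiting for busybox")
}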

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-448285 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-448285 --alsologtostderr -v=3: (12.186393386s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285: exit status 7 (75.832217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-448285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (52.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1115 10:32:50.068500  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-448285 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.454051378s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-448285 -n old-k8s-version-448285
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l44x5" [eb4f1b09-dbba-4d40-a2fa-e31fbc421449] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0034533s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l44x5" [eb4f1b09-dbba-4d40-a2fa-e31fbc421449] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004559101s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-448285 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-448285 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/FirstStart (75.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.475396714s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.48s)

TestStartStop/group/embed-certs/serial/FirstStart (86.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.98484588s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.98s)

TestStartStop/group/no-preload/serial/DeployApp (8.38s)
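DeployApp creates the busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox label to become healthy, and then reads the open-file limit inside the container. A hand-run sketch of the same sequence (kubectl wait stands in for the test's own polling helper):

    kubectl --context no-preload-907610 create -f testdata/busybox.yaml
    kubectl --context no-preload-907610 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m
    # The test's final assertion checks the file-descriptor limit in the pod:
    kubectl --context no-preload-907610 exec busybox -- /bin/sh -c "ulimit -n"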

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-907610 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9f8722f6-c3d5-4376-a8a0-64c12d93558c] Pending
helpers_test.go:352: "busybox" [9f8722f6-c3d5-4376-a8a0-64c12d93558c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9f8722f6-c3d5-4376-a8a0-64c12d93558c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003935395s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-907610 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

TestStartStop/group/no-preload/serial/Stop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-907610 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-907610 --alsologtostderr -v=3: (12.021681844s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
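EnableAddonAfterStop first queries the stopped profile with a Go-template status check (exit code 7 means the host is stopped, which the test tolerates, hence the "may be ok" note) and then enables the dashboard addon with an overridden MetricsScraper image. Reproduced by hand it looks roughly like:

    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610
    echo $?   # 7 is expected here: the host is stopped, not broken
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-907610 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4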

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610: exit status 7 (74.673336ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-907610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (56.61s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-907610 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.097724655s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-907610 -n no-preload-907610
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.61s)

TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-531596 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [70b4f86d-cd24-4414-8fb3-e393fdc4fbe1] Pending
helpers_test.go:352: "busybox" [70b4f86d-cd24-4414-8fb3-e393fdc4fbe1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [70b4f86d-cd24-4414-8fb3-e393fdc4fbe1] Running
E1115 10:35:29.824896  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.028381192s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-531596 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

TestStartStop/group/embed-certs/serial/Stop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-531596 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-531596 --alsologtostderr -v=3: (12.027668858s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596: exit status 7 (67.185304ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (59.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-531596 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.687626083s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-531596 -n embed-certs-531596
E1115 10:36:48.453161  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:48.615212  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nf42b" [a18b3230-1ea5-4199-abb6-f03a528c964f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003562995s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nf42b" [a18b3230-1ea5-4199-abb6-f03a528c964f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004141905s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-907610 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
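VerifyKubernetesImages dumps the profile's image list as JSON and reports anything outside minikube's expected set (the busybox and kindnet images above are reported but not fatal). A looser manual filter, assuming each JSON entry exposes a repoTags array (the field name is an assumption, not verified here):

    out/minikube-linux-arm64 -p no-preload-907610 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry\.k8s\.io/' || true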

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-907610 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.94s)
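This profile moves the API server to port 8444 via --apiserver-port=8444. Once the start below finishes, the nonstandard port can be confirmed from the kubeconfig entry minikube writes (assuming, as is usual, that the kubeconfig cluster is named after the profile); the server URL should end in :8444:

    kubectl --context default-k8s-diff-port-303164 cluster-info
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-303164")].cluster.server}'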

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:36:48.290658  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:48.296958  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:48.308264  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:48.329619  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:48.370954  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.937153091s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.94s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-57w6h" [b33d94a0-d2c6-4220-b732-0427f005a96c] Running
E1115 10:36:48.937266  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:49.579269  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:50.861482  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:36:53.423634  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003461475s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-57w6h" [b33d94a0-d2c6-4220-b732-0427f005a96c] Running
E1115 10:36:58.545301  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004246381s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-531596 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-531596 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/newest-cni/serial/FirstStart (38.26s)
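The newest-cni profile starts with --network-plugin=cni, pushes a custom pod CIDR through --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, and only waits for the apiserver, system pods and the default service account, which is why its DeployApp and *AfterStop checks are skipped later. An illustrative way to eyeball the three waited-for pieces after the start below (not part of the test itself):

    kubectl --context newest-cni-395885 get --raw=/readyz
    kubectl --context newest-cni-395885 -n kube-system get pods
    kubectl --context newest-cni-395885 -n default get serviceaccount default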

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1115 10:37:29.269895  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (38.255106898s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.26s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f3cee1f1-d6f9-47b9-8bb8-b3314819f561] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f3cee1f1-d6f9-47b9-8bb8-b3314819f561] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003519991s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-395885 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-395885 --alsologtostderr -v=3: (1.491750329s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.49s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885: exit status 7 (79.506563ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-395885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-395885 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.99154638s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-395885 -n newest-cni-395885
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.37s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-303164 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-303164 --alsologtostderr -v=3: (12.452749241s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.45s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-395885 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164: exit status 7 (111.51178ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-303164 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-303164 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.399340732s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-303164 -n default-k8s-diff-port-303164
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.77s)

TestNetworkPlugins/group/auto/Start (85.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.976902176s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.98s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4mmm8" [794a2f16-96be-4a3a-822c-5499be15dc22] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004154668s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4mmm8" [794a2f16-96be-4a3a-822c-5499be15dc22] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003655766s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-303164 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-303164 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/kindnet/Start (83.45s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1115 10:39:32.153500  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:41.881163  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:41.887600  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:41.898966  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:41.920337  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:41.961700  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:42.043283  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:42.204628  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:42.525998  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:43.168250  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:39:44.449580  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.446318733s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.47s)
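KubeletFlags simply SSHes into the node and lists the running kubelet with its arguments. Beyond what the test asserts, the same output can be grepped to confirm the kubelet is pointed at the CRI-O socket; the flag name below is the standard kubelet option and this is an illustrative check, not part of the test:

    out/minikube-linux-arm64 ssh -p auto-864099 "pgrep -a kubelet"
    out/minikube-linux-arm64 ssh -p auto-864099 "pgrep -a kubelet" \
      | grep -o -- '--container-runtime-endpoint=[^ ]*'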

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-864099 "pgrep -a kubelet"
I1115 10:39:45.221203  516637 config.go:182] Loaded profile config "auto-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

TestNetworkPlugins/group/auto/NetCatPod (12.4s)
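NetCatPod swaps in the netcat deployment from testdata/netcat-deployment.yaml and waits for the app=netcat pod; the DNS, Localhost and HairPin checks that follow run their probes inside that pod. The whole sequence, approximated by hand with the commands visible in this report (kubectl wait stands in for the test's polling helper):

    kubectl --context auto-864099 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-864099 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    # Cluster DNS, localhost and hairpin connectivity, in that order:
    kubectl --context auto-864099 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"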

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-864099 replace --force -f testdata/netcat-deployment.yaml
I1115 10:39:45.612230  516637 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-76qh4" [706a5813-e6a5-4b32-8378-6773606d0b1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 10:39:47.011696  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-76qh4" [706a5813-e6a5-4b32-8378-6773606d0b1d] Running
E1115 10:39:52.133887  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.002924726s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.40s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (64.38s)
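The network-plugin groups differ mainly in how the CNI is selected on the start line: kindnet used --cni=kindnet, this run uses --cni=calico, custom-flannel points --cni= at testdata/kube-flannel.yaml, enable-default-cni passes --enable-default-cni=true, and auto passed no CNI flag at all. The calico variant as run below, plus a matching cleanup step:

    out/minikube-linux-arm64 start -p calico-864099 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=calico --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p calico-864099   # tear the profile down afterwards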

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1115 10:40:22.857978  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:40:29.824565  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/addons-612806/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.376474153s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.38s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
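ControllerPod verifies the CNI's own agent pod is healthy before connectivity is exercised: the kindnet pods carry app=kindnet in kube-system, and the calico check further down looks for k8s-app=calico-node. A manual spot-check of both:

    kubectl --context kindnet-864099 -n kube-system get pods -l app=kindnet
    kubectl --context calico-864099 -n kube-system get pods -l k8s-app=calico-node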

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ffs4l" [3d9233be-4731-4852-ac7e-e533c8b422f1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006857774s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-864099 "pgrep -a kubelet"
I1115 10:40:57.553340  516637 config.go:182] Loaded profile config "kindnet-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-864099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hql5j" [fc963f33-ff40-4f04-a550-26487f666590] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 10:41:03.819456  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hql5j" [fc963f33-ff40-4f04-a550-26487f666590] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003808602s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wt4jx" [51dc8319-be15-47d0-9521-ff96c41dfdf4] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-wt4jx" [51dc8319-be15-47d0-9521-ff96c41dfdf4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004355788s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-864099 "pgrep -a kubelet"
I1115 10:41:30.761159  516637 config.go:182] Loaded profile config "calico-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-864099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-drnqq" [00d665f5-fb85-4038-8ac7-d1bcdff44311] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-drnqq" [00d665f5-fb85-4038-8ac7-d1bcdff44311] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005788098s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

TestNetworkPlugins/group/custom-flannel/Start (68.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.64685887s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.65s)

TestNetworkPlugins/group/calico/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

TestNetworkPlugins/group/calico/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.36s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (45.59s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1115 10:42:15.994883  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/old-k8s-version-448285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:25.741483  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (45.587460778s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.59s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-864099 "pgrep -a kubelet"
I1115 10:42:42.191051  516637 config.go:182] Loaded profile config "custom-flannel-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-864099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kbktj" [f636f93a-f205-46fa-85bc-e0c8dbe6f0c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kbktj" [f636f93a-f205-46fa-85bc-e0c8dbe6f0c5] Running
E1115 10:42:49.194883  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.201297  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.212765  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.234241  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.275752  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.357135  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.518759  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:49.840239  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:50.067785  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/functional-755106/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:50.481560  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:42:51.763119  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002998059s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
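The custom-flannel connectivity subtests above can be replayed by hand against the same profile. A minimal sketch: the kubectl wait call is an assumption standing in for the test's own 15m pod polling, while the remaining commands are the ones logged above.
	# deploy the netcat test workload used by the NetCatPod subtest
	kubectl --context custom-flannel-864099 replace --force -f testdata/netcat-deployment.yaml
	# wait until the app=netcat pod is Ready (assumption: equivalent to the test's polling loop)
	kubectl --context custom-flannel-864099 wait --for=condition=Ready pod -l app=netcat --timeout=15m
	# DNS: resolve the in-cluster API service from inside the pod
	kubectl --context custom-flannel-864099 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: TCP connect to the pod's own loopback on port 8080
	kubectl --context custom-flannel-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: connect back to the pod through its own "netcat" service name
	kubectl --context custom-flannel-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"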

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-864099 "pgrep -a kubelet"
I1115 10:42:55.332136  516637 config.go:182] Loaded profile config "enable-default-cni-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-864099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kw9np" [48a41abd-8c6e-418c-b46b-f1011c58055c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 10:42:59.446857  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kw9np" [48a41abd-8c6e-418c-b46b-f1011c58055c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003927854s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.571683676s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1115 10:43:30.170651  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:11.134305  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/default-k8s-diff-port-303164/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-864099 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.809590296s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-8g7d4" [5db8a0ca-e168-4f16-9953-6cfd19d8bfd3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003465478s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
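The flannel controller check above can also be inspected directly with kubectl, using the kube-flannel namespace and app=flannel label the test waits on. A minimal sketch; the kubectl wait call is an assumption mirroring the test's 10m timeout.
	# list the flannel DaemonSet pods the ControllerPod subtest waits for
	kubectl --context flannel-864099 -n kube-flannel get pods -l app=flannel
	# block until they report Ready (assumption: mirrors the test's 10m timeout)
	kubectl --context flannel-864099 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m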

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-864099 "pgrep -a kubelet"
I1115 10:44:34.069186  516637 config.go:182] Loaded profile config "flannel-864099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-864099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7p5r4" [bd55ffe0-cd40-4c3a-93ac-4931f07af455] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7p5r4" [bd55ffe0-cd40-4c3a-93ac-4931f07af455] Running
E1115 10:44:41.880810  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/no-preload-907610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004241841s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-864099 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-864099 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vjtxr" [7a086bb0-5712-4ff7-9d96-25ed15db4235] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 10:44:45.573179  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:45.579525  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:45.591163  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:45.612525  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:45.653964  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:45.735345  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:45.896769  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:46.218603  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:46.860108  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:44:48.142243  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vjtxr" [7a086bb0-5712-4ff7-9d96-25ed15db4235] Running
E1115 10:44:50.704338  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004435199s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-864099 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1115 10:44:55.826309  516637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/auto-864099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-864099 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.67s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-650018 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-650018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-650018
--- SKIP: TestDownloadOnlyKic (0.67s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-167523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-167523
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-864099 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-864099" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:29:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-480353
contexts:
- context:
    cluster: kubernetes-upgrade-480353
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:29:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-480353
  name: kubernetes-upgrade-480353
current-context: kubernetes-upgrade-480353
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-480353
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/kubernetes-upgrade-480353/client.crt
    client-key: /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/kubernetes-upgrade-480353/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-864099

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-864099"

                                                
                                                
----------------------- debugLogs end: kubenet-864099 [took: 3.419676476s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-864099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-864099
--- SKIP: TestNetworkPlugins/group/kubenet (3.61s)
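The kubeconfig dumped in the debug log above has current-context kubernetes-upgrade-480353 rather than kubenet-864099, which is consistent with the repeated "context was not found" errors: the kubenet profile is never created because the test is skipped on crio. A minimal sketch for reproducing this locally, assuming the same kubeconfig:
	# list the contexts actually present in the collected kubeconfig
	kubectl config get-contexts
	# any command addressed to the kubenet context reproduces the errors shown in the debug log
	kubectl --context kubenet-864099 get pods -A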

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-864099 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-864099" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-514793/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:29:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-480353
contexts:
- context:
    cluster: kubernetes-upgrade-480353
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:29:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-480353
  name: kubernetes-upgrade-480353
current-context: kubernetes-upgrade-480353
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-480353
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/kubernetes-upgrade-480353/client.crt
    client-key: /home/jenkins/minikube-integration/21895-514793/.minikube/profiles/kubernetes-upgrade-480353/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-864099

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-864099" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864099"

                                                
                                                
----------------------- debugLogs end: cilium-864099 [took: 3.755203122s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-864099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-864099
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)

                                                
                                    